I installed kubernetes-external-secrets with Helm on GKE.
GKE: 1.16.15-gke.6000 in asia-northeast1
Helm chart app version: 6.2.0
Using Workload Identity, as the documentation describes.
For Workload Identity, the service account I bind with the command below (my-secrets-sa@$PROJECT.iam.gserviceaccount.com) has the Secret Manager Admin role, which should be sufficient for using Google Secret Manager.
$ gcloud iam service-accounts add-iam-policy-binding --role roles/iam.workloadIdentityUser --member "serviceAccount:$CLUSTER_PROJECT.svc.id.goog[$SECRETS_NAMESPACE/kubernetes-external-secrets]" my-secrets-sa@$PROJECT.iam.gserviceaccount.com
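The Secret Manager Admin role mentioned above would typically have been granted with something like this (a sketch; the exact binding may differ):
$ gcloud projects add-iam-policy-binding $PROJECT \
    --member "serviceAccount:my-secrets-sa@$PROJECT.iam.gserviceaccount.com" \
    --role roles/secretmanager.admin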
Workload Identity appears to be set up correctly, because checking the service account from a pod on GKE shows the correct service account (following the steps at the link below).
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#enable_workload_identity_on_a_new_cluster
Creating a pod in the cluster and checking auth inside it shows my-secrets-sa@$PROJECT.iam.gserviceaccount.com:
$ kubectl run -it --image google/cloud-sdk:slim --serviceaccount ksa-name --namespace k8s-namespace workload-identity-test
$ gcloud auth list
But when I create an ExternalSecret, it shows this error:
ERROR, 7 PERMISSION_DENIED: Permission 'secretmanager.versions.access' denied for resource 'projects/project-id/secrets/my-gsm-secret-name/versions/latest' (or it may not exist).
The secret my-gsm-secret-name itself exists in Secret Manager, so it is not a case of "it may not exist".
Also, the permission should be granted correctly through Workload Identity.
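For reference, one way to double-check the GCP side independently of the cluster is reading the secret while impersonating the Google service account (a sketch; my-project is a placeholder, and impersonation requires roles/iam.serviceAccountTokenCreator on my-secrets-sa):
$ gcloud secrets versions access latest \
    --secret my-gsm-secret-name \
    --project my-gsm-secret-project \
    --impersonate-service-account my-secrets-sa@my-project.iam.gserviceaccount.com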
This is the ExternalSecret I defined:
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: gcp-secrets-manager-example # name of the k8s external secret and the k8s secret
spec:
  backendType: gcpSecretsManager
  projectId: my-gsm-secret-project
  data:
    - key: my-gsm-secret-name # name of the GCP secret
      name: my-kubernetes-secret-name # key name in the k8s secret
      version: latest # version of the GCP secret
      property: value # name of the field in the GCP secret
Has anyone had a similar problem before?
Thank you
The whole command sequence:
Create a cluster with a workload pool.
$ gcloud container clusters create cluster --region asia-northeast1 --node-locations asia-northeast1-a --num-nodes 1 --preemptible --workload-pool=my-project.svc.id.goog
Create a Kubernetes service account.
$ kubectl create serviceaccount --namespace default ksa
Bind the Kubernetes service account to the Google service account.
$ gcloud iam service-accounts add-iam-policy-binding \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:my-project.svc.id.goog[default/ksa]" \
    my-secrets-sa@my-project.iam.gserviceaccount.com
Add the annotation.
$ kubectl annotate serviceaccount \
    --namespace default \
    ksa \
    iam.gke.io/gcp-service-account=my-secrets-sa@my-project.iam.gserviceaccount.com
Install with Helm.
$ helm install my-release external-secrets/kubernetes-external-secrets
Create the ExternalSecret.
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: gcp-secrets-manager-example # name of the k8s external secret and the k8s secret
spec:
  backendType: gcpSecretsManager
  projectId: my-gsm-secret-project
  data:
    - key: my-gsm-secret-name # name of the GCP secret
      name: my-kubernetes-secret-name # key name in the k8s secret
      version: latest # version of the GCP secret
      property: value # name of the field in the GCP secret
$ kubectl apply -f external-secret.yaml
I noticed that I had used the wrong Kubernetes service account.
When installing the Helm chart, a new Kubernetes service account my-release-kubernetes-external-secrets was created, and the controller pods run under this service account.
So I needed to bind my-release-kubernetes-external-secrets to the Google service account instead, roughly as sketched below.
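For reference, the corrected binding and annotation would look roughly like this (a sketch, assuming the release my-release lives in the default namespace):
$ gcloud iam service-accounts add-iam-policy-binding \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:my-project.svc.id.goog[default/my-release-kubernetes-external-secrets]" \
    my-secrets-sa@my-project.iam.gserviceaccount.com
$ kubectl annotate serviceaccount \
    --namespace default \
    my-release-kubernetes-external-secrets \
    iam.gke.io/gcp-service-account=my-secrets-sa@my-project.iam.gserviceaccount.com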
Now, it works well.
Thank you @matt_j @norbjd
Related
I am trying to assume a role from an EKS container that has an IRSA role attached to it. However, when I assume the role, I can see that the container is using the EC2 IAM role instead of the IRSA role.
$ kubectl get sa web -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxxxx:role/web
Assume the role from inside the container:
CREDENTIALS=$(aws sts assume-role --role-arn "$ROLE_ARN" --role-session-name "$ROLE_SESSION_NAME")
An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::xxxxxxxxxx:assumed-role/eks-instance/role/i-02xxxxxxxxxx is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::yyyyyyyyyy:role/app
Cleaning up file based variables
aws sts get-caller-identity returns the EC2 IAM role.
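One thing worth checking inside the pod is whether the IRSA credentials were actually injected; the EKS pod identity webhook normally sets these environment variables (a sketch, the values below are placeholders):
$ env | grep -E 'AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE'
AWS_ROLE_ARN=arn:aws:iam::xxxxxxxx:role/web
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
If they are missing, or the AWS CLI/SDK version is too old to support web identity credentials, the credential chain falls back to the EC2 instance profile.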
Introduction:
I am trying to deploy the RabbitMQ Helm chart to GKE with my GitLab CI/CD pipeline. The command I use to install the chart is:
helm upgrade --install rabbitmq --create-namespace --namespace kubi-app-main -f envs/main/rabbitmq/rabbitmq.yaml bitnami/rabbitmq
envs/main/rabbitmq/rabbitmq.yaml:
auth:
  username: user
  password: password
# The used vhost is default-vhost
extraConfiguration: |-
  default_vhost = default-vhost
  default_permissions.configure = .*
  default_permissions.read = .*
  default_permissions.write = .*
The GitLab job first connects to the GKE cluster with gcloud:
echo "$SERVICE_ACCOUNT_KEY" > key.json
gcloud auth activate-service-account --key-file=key.json
gcloud config set project project-kubi-app
gcloud container clusters get-credentials cluster-1 --zone europe-west9-a --project project-kubi-app
The Issue:
But the helm upgrade fails:
Error: roles.rbac.authorization.k8s.io is forbidden: User "kubiapp-cluster-sa@project-kubi-app.iam.gserviceaccount.com" cannot create resource "roles" in API group "rbac.authorization.k8s.io" in the namespace "kubi-app-main": requires one of ["container.roles.create"] permission(s).
Checking the roles of the user (service account) on the project:
gcloud projects get-iam-policy project-kubi-app --flatten="bindings[].members" --format='table(bindings.role)' --filter="bindings.members:kubiapp-cluster-sa@project-kubi-app.iam.gserviceaccount.com"
This returns ROLE roles/editor, meaning that my service account has the Editor role on the project.
From what I understand, the service account kubiapp-cluster-sa@project-kubi-app.iam.gserviceaccount.com has the Editor role on the project project-kubi-app.
BUT the service account that I am using can't create a Role in the namespace kubi-app-main.
I don't understand what this Role is used for, but it originates from the RabbitMQ Helm chart.
From the RabbitMQ Helm chart:
...
# Source: rabbitmq/templates/rolebinding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-endpoint-reader
  namespace: "kubi-app-main"
  labels:
    app.kubernetes.io/name: rabbitmq
    helm.sh/chart: rabbitmq-10.1.8
    app.kubernetes.io/instance: rabbitmq
    app.kubernetes.io/managed-by: Helm
subjects:
  - kind: ServiceAccount
    name: rabbitmq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rabbitmq-endpoint-reader
...
---
EDIT:
I have changed my service account role to Owner and now it works, but I would like to know the role required to create other roles.
roles/editor allows you to create/update/delete resources for most services, but it does not include the permission to perform any of those operations on roles. roles/owner, on the other hand, does, since it essentially makes you an admin of (almost) every resource.
For GKE, the usual role required to create/modify/update roles within the cluster is roles/container.clusterAdmin. Check out GKE roles.
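A sketch of granting that role at the project level (the role choice follows the recommendation above; substitute your own project and service account):
gcloud projects add-iam-policy-binding project-kubi-app \
    --member "serviceAccount:kubiapp-cluster-sa@project-kubi-app.iam.gserviceaccount.com" \
    --role roles/container.clusterAdmin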
I use GKE Autopilot. I've created some log-based metrics that I'd like to use to scale up pods.
To begin with, I'm not sure if it's a great idea; the metric is just the number of records in the DB to process. I have a feeling that using logs to scale the app might bring in some weird infinite loop or something.
Anyhow, I've tried entering logging.googleapis.com|user|celery-person-count as an external metric and got "HPA cannot read metric value". I installed the Stackdriver adapter but am not too sure how to use it either.
GKE Autopilot clusters have Workload Identity enabled for consuming other GCP services, including Cloud Monitoring.
You'll want to follow the steps here in order to deploy the Custom Metrics Adapter on Autopilot clusters.
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user "$(gcloud config get-value account)"
kubectl create namespace custom-metrics
kubectl create serviceaccount --namespace custom-metrics \
custom-metrics-stackdriver-adapter
gcloud iam service-accounts create GSA_NAME
gcloud projects add-iam-policy-binding PROJECT_ID \
--member "serviceAccount:GSA_NAME#PROJECT_ID.iam.gserviceaccount.com" \
--role "roles/monitoring.viewer"
gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[custom-metrics/custom-metrics-stackdriver-adapter]" \
GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
kubectl annotate serviceaccount \
--namespace custom-metrics custom-metrics-stackdriver-adapter \
iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
kubectl apply -f manifests/adapter_new_resource_model.yaml
Given that you've already deployed the adapter, you'll want to delete the deployment first, although you might just be able to run the steps starting at gcloud iam ...
You'll need to replace GSA_NAME with a name of your choosing and PROJECT_ID with your Google Cloud project ID.
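Once the adapter is running, the HPA would reference the log-based metric as an external metric, roughly like this (a sketch; the Deployment name celery-worker and the target value are placeholders):
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: celery-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: celery-worker
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: External
    external:
      metric:
        name: logging.googleapis.com|user|celery-person-count
      target:
        type: AverageValue
        averageValue: 20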
Yes, Horizontal Pod Autoscaling can be done using metric values. If your application is running in Kubernetes, use a custom metric for autoscaling; if it is not running in Kubernetes, use an external metric.
A custom metric can be selected for any of the following:
- A particular node, Pod, or any Kubernetes object of any kind, including a CustomResourceDefinition (CRD).
- The average value for a metric reported by all Pods in a Deployment.
Refer to this documentation for a detailed explanation of using custom metrics for Horizontal Pod Autoscaling: Custom metrics
GitHub link:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user "$(gcloud config get-value account)"
New resource model:
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter_new_resource_model.yaml
Deploying application with metrics
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples.git
cd kubernetes-engine-samples/custom-metrics-autoscaling/direct-to-sd
HPA example
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: custom-metric-sd
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: custom-metric-sd
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: custom-metric
      target:
        type: AverageValue
        averageValue: 20
You can check out this link for more information.
If your workload is running on top of a VM rather than on Kubernetes, use external metrics instead of custom metrics.
Refer to the documentation on Custom and external metrics for autoscaling workloads.
When running (on GCP):
$ helm upgrade \
--values ./values.yaml \
--install \
--namespace "weaviate" \
"weaviate" \
weaviate.tgz
It returns:
UPGRADE FAILED
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
Error: UPGRADE FAILED: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
UPDATE: based on the solution below
$ vim rbac-config.yaml
Add to the file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Run:
$ kubectl create -f rbac-config.yaml
$ helm init --service-account tiller --upgrade
Note: based on Helm v2.
tl;dr: Set up Helm with the appropriate authorization settings for your cluster; see https://v2.helm.sh/docs/using_helm/#role-based-access-control
Long Answer
Your experience is not specific to the Weaviate Helm chart; rather, it looks like Helm is not set up according to the cluster's authorization settings. Other Helm commands should fail with the same or a similar error.
The following error
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
means that the default service account in the kube-system namespace is lacking permissions. I assume you have installed Helm/Tiller in the kube-system namespace, as this is the default if no other arguments are specified on helm init. Since you haven't created a specific service account for Tiller to use, it defaults to the default service account.
Since you mention that you are running on GCP, I assume this means you are using GKE. GKE has RBAC authorization enabled by default. In an RBAC setting, no one has any rights by default; all rights need to be granted explicitly.
The Helm docs list several options on how to make Helm/Tiller work in an RBAC-enabled setting. If the cluster has the sole purpose of running Weaviate, you can choose the simplest option: Service Account with cluster-admin role. The process described there essentially creates a dedicated service account for Tiller and adds the required ClusterRoleBinding to the existing cluster-admin ClusterRole. Note that this effectively makes Helm/Tiller an admin of the entire cluster.
If you are running a multi-tenant cluster and/or want to limit Tiller's permissions to a specific namespace, you need to choose one of the alternatives, such as the namespace-scoped setup sketched below.
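A rough sketch of that namespace-scoped alternative, following the Helm v2 RBAC docs (tiller-world is a placeholder namespace, and the Role below is intentionally broad within that namespace):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: tiller-world
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: tiller-world
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world
Tiller is then initialized into that namespace, e.g. helm init --service-account tiller --tiller-namespace tiller-world.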
I have full admin access to a GKE cluster, but I want to be able to create a kubernetes context with just read-only privileges. This way I can prevent myself from accidentally messing with the cluster. However, I still want to be able to switch into a mode with full admin access temporarily when I need to make changes (I would probably use Cloud Shell for this to fully distinguish the two).
I haven't found much documentation about this; it seems I can set up roles based on my email, but not have two roles for one user.
Is there any way to do this? Or any other way to prevent fat-finger deleting prod?
There are a few ways to do this with GKE. A context in your KUBECONFIG consists of a cluster and a user. Since you want to be pointing at the same cluster, it's the user that needs to change. Permissions for what actions users can perform on various resources can be controlled in a couple ways, namely via Cloud IAM policies or via Kubernetes RBAC. The former applies project-wide, so unless you want to create a subject that has read-only access to all clusters in your project, rather than a specific cluster, it's preferable to use the more narrowly-scoped Kubernetes RBAC.
The following types of subjects can authenticate with a GKE cluster and have Kubernetes RBAC policies applied to them (see here):
a registered (human) GCP user
a Kubernetes ServiceAccount
a GCloud IAM service account
a member of a G Suite Google Group
Since you're not going to register another human to accomplish this read-only access pattern and G Suite Google Groups are probably overkill, your options are a Kubernetes ServiceAccount or a GCloud IAM service account. For this answer, we'll go with the latter.
Here are the steps:
Create a GCloud IAM service account in the same project as your Kubernetes cluster (a command sketch for this step is shown after the ClusterRoleBinding below).
Create a local gcloud configuration to avoid cluttering your default one. Just as you want to create a new KUBECONFIG context rather than modifying the user of your current context, this does the equivalent thing but for gcloud itself rather than kubectl. Run the command gcloud config configurations create <configuration-name>.
Associate this configuration with your GCloud IAM service account: gcloud auth activate-service-account <service_account_email> --key-file=</path/to/service/key.json>.
Add a context and user to your KUBECONFIG file so that you can authenticate to your GKE cluster as this GCloud IAM service account as follows:
contexts:
- ...
- ...
- name: <cluster-name>-read-only
  context:
    cluster: <cluster-name>
    user: <service-account-name>
users:
- ...
- ...
- name: <service-account-name>
  user:
    auth-provider:
      name: gcp
      config:
        cmd-args: config config-helper --format=json --configuration=<configuration-name>
        cmd-path: </path/to/gcloud/cli>
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
Add a ClusterRoleBinding so that this subject has read-only access to the cluster:
$ cat <<EOF | kubectl apply -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <any-name>
subjects:
- kind: User
  name: <service-account-email>
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
EOF
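For completeness, step 1 above (and the key file used in step 3) might look like this (a sketch; gke-read-only and <project-id> are placeholders):
$ gcloud iam service-accounts create gke-read-only \
    --display-name "GKE read-only" --project <project-id>
$ gcloud iam service-accounts keys create key.json \
    --iam-account gke-read-only@<project-id>.iam.gserviceaccount.com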
Try it out:
$ kubectl config use-context <cluster-name>-read-only
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
Error from server (Forbidden): namespaces is forbidden: User "<service-account-email>" cannot create resource "namespaces" in API group "" at the cluster scope: Required "container.namespaces.create" permission.
$ kubectl config use-context <original-context>
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
namespace/foo created