Create unprivileged RDS DB role using CDK - postgresql

I deploy AWS Aurora for Postgres using AWS CDK, which creates a cluster admin role, and makes its password available as a secret to other infrastructure, notably Lambdas. I'm looking for a way to also create an unprivileged role in the database, and then disseminate its login credentials to Lambdas etc., to eliminate the risk of accessing the database with superuser credentials by design.
CDK only seems to create a single user account, and from there IaC authors have to fend for themselves. How could CDK be adapted to this scenario?

The CDK itself - like all IaC tooling, e.g. Terraform - manages the provisioning of infrastructure.
You essentially want to initialise your RDS instance and create a user/role within the database itself, which isn't really infrastructure provisioning and thus isn't something the CDK covers natively.
While this isn't built into the CDK, you can use AwsCustomResource to create the unprivileged role via a Lambda after the RDS database has been created. Take a look at the official blog post titled Use AWS CDK to initialize Amazon RDS instances for more information on how to get started.
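For illustration, here is a minimal CDK (Python) sketch of that pattern. The Lambda asset path and handler are hypothetical: the handler would read the admin secret, connect with a bundled Postgres driver (e.g. pg8000), run the CREATE ROLE / GRANT statements, and respond to the custom resource's Create/Update/Delete events.

from aws_cdk import (
    CustomResource, Duration, Stack,
    aws_ec2 as ec2,
    aws_lambda as _lambda,
    aws_rds as rds,
    custom_resources as cr,
)
from constructs import Construct

class DatabaseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        vpc = ec2.Vpc(self, "Vpc", max_azs=2)

        cluster = rds.DatabaseCluster(
            self, "AuroraPostgres",
            engine=rds.DatabaseClusterEngine.aurora_postgres(
                version=rds.AuroraPostgresEngineVersion.VER_15_3
            ),
            writer=rds.ClusterInstance.serverless_v2("writer"),
            vpc=vpc,
        )

        # Hypothetical Lambda that logs in with the admin secret, runs
        # CREATE ROLE app_user LOGIN PASSWORD ... / GRANT ..., and stores the
        # new credentials in a separate secret for the application Lambdas.
        init_fn = _lambda.Function(
            self, "CreateAppRoleFn",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda/create_role"),  # hypothetical asset path
            vpc=vpc,
            timeout=Duration.minutes(2),
        )
        cluster.secret.grant_read(init_fn)
        cluster.connections.allow_default_port_from(init_fn)

        # Lambda-backed custom resource so the role is created as part of the deployment
        provider = cr.Provider(self, "InitProvider", on_event_handler=init_fn)
        CustomResource(
            self, "CreateAppRole",
            service_token=provider.service_token,
            properties={"AdminSecretArn": cluster.secret.secret_arn},
        )

The blog post wires this slightly differently (it invokes the initializer Lambda through an AwsCustomResource), but the idea is the same: the role is created as a deployment step, and only the unprivileged credentials are handed to the application Lambdas.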

Related

How to get IAM/service account used by juicefs to access GCS in GKE?

I'm using juicefs-csi in GKE. I use Postgres as the meta-store and GCS as storage. The corresponding settings are as follows:
node:
  # ...
storageClasses:
- name: juicefs-sc
  enabled: true
  reclaimPolicy: Retain
  backend:
    name: juicefs
    metaurl: postgres://user:password@my-ec2-where-postgre-installed.ap-southeast-1.compute.amazonaws.com:5432/the-database?sslmode=disable
    storage: gs
    bucket: gs://my-bucket
# ...
According to this documentation, I don't have to specify access key/secret (like in S3).
But unfortunately, whenever I try to write anything to the mounted volume (with juicefs-sc storage class), I always get this error:
AccessDeniedException: 403 Caller does not have storage.objects.create access to the Google Cloud Storage object.
I believe it should be related to IAM role.
My question is, how could I know which IAM user/service account is used by juicefs to access GCS, so that I can assign a sufficient role to it?
Thanks in advance.
EDIT
Step by step:
Download the juicefs-csi Helm chart
Add the values as described in the question, apply
Create a pod that mounts a PV with the juicefs-sc storage class
Try to read/write a file at the mount point
OK, I misunderstood you at the beginning.
When you are creating a GKE cluster you can specify which GCP Service Account will be used by its nodes.
By default it's the Compute Engine default service account (71025XXXXXX-compute@developer.gserviceaccount.com), which lacks a few Cloud product permissions (for Cloud Storage it only has read-only access). That is consistent with the 403 message you are seeing.
If you want to check which Service Account was assigned to a VM by default, you can do this via
Compute Engine > VM Instances > choose one of the VMs from this cluster > in the details, find API and identity management
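If you would rather check programmatically, a quick way (run from a node, or from a pod on that node) is to ask the GCE metadata server which service account and access scopes the instance is running with - a minimal Python sketch:

import urllib.request

BASE = "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/"

def metadata(path: str) -> str:
    # The Metadata-Flavor header is required by the GCE metadata server
    req = urllib.request.Request(BASE + path, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode().strip()

print("service account:", metadata("email"))
print("access scopes:")
print(metadata("scopes"))

If the storage-related scope shown is only devstorage.read_only, that matches the 403 you are getting.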
So you have three options to solve this issue:
1. During Cluster creation
In Node Pools > Security, you have Access scopes where you can add some additional permissions.
Allow full access to all Cloud APIs - grants access to all listed Cloud APIs
Set access for each API
In your case you could just use Set access for each API and change Storage to Full.
2. Set permissions with a Service Account
You would need to create a new Service Account and provide proper permissions for Compute Engine and Storage. More details about how to create SA you can find in Creating and managing service accounts.
3. Use Workload Identity
You can enable Workload Identity on your Google Kubernetes Engine (GKE) clusters. Workload Identity allows workloads in your GKE clusters to impersonate Identity and Access Management (IAM) service accounts to access Google Cloud services.
For more details you should check Using Workload Identity.
Useful links
Configuring Velero - Velero is backup and restore software, but options 2 and 3 above are covered there; you would just need to adjust the commands/permissions to your scenario.
Authenticating to Google Cloud with service accounts

How to check existing users and groups in kubernetes cluster?

We can check the service accounts in a Kubernetes cluster. Likewise, is it possible to check the existing users and groups of my Kubernetes cluster with cluster admin privileges? If yes, then how? If no, then why not?
NOTE: I am using EKS
Posting this as a community wiki, feel free to edit and expand.
This won't answer everything, but here are some concepts and ideas.
In short, there's no easy way; it's not possible using Kubernetes itself. The reason is:
All Kubernetes clusters have two categories of users: service accounts managed by Kubernetes, and normal users.
It is assumed that a cluster-independent service manages normal users in the following ways:
an administrator distributing private keys
a user store like Keystone or Google Accounts
a file with a list of usernames and passwords
In this regard, Kubernetes does not have objects which represent normal user accounts. Normal users cannot be added to a cluster through an API call.
Source
More details and examples from another answer on SO
As for the EKS part that was mentioned, it should be done using AWS IAM in connection with Kubernetes RBAC. Below are articles about setting up IAM roles in a Kubernetes cluster; in the same way it is possible to find which role has cluster admin permissions:
Managing users or IAM roles for your cluster
provide access to other IAM users and roles
If another tool is used for identity management (e.g. LDAP), that tool should be used for this.
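For the EKS case specifically, the closest thing to "listing users" is reading the IAM mappings from the aws-auth ConfigMap. Here is a small sketch with the official Python client, equivalent to kubectl -n kube-system get configmap aws-auth -o yaml:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# On EKS, aws-auth in kube-system holds the IAM role/user to K8s user/group mappings
cm = v1.read_namespaced_config_map(name="aws-auth", namespace="kube-system")
print("mapRoles:\n", cm.data.get("mapRoles", "<none>"))
print("mapUsers:\n", cm.data.get("mapUsers", "<none>"))

Entries whose groups include system:masters effectively have cluster admin.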

Authenticate to K8s as specific user

I recently built an EKS cluster with Terraform. In doing that, I created 3 different roles on the cluster, all mapped to the same AWS IAM role (the one used to create the cluster).
Now, when I try to manage the cluster, RBAC seems to use the least privileged of these (which I made a view role), which only has read-only access.
Is there any way to tell the config to use the admin role instead of view?
I'm afraid I've hosed this cluster and may need to rebuild it.
Some intro
You don't need to create a mapping in K8s for an IAM entity that created an EKS cluster, because by default it will be mapped to "system:masters" K8s group automatically. So, if you want to give additional permissions in a K8s cluster, just map other IAM roles/users.
In EKS, IAM entities are used for authentication, and K8s RBAC is used for authorization. The mapping between them is set in the aws-auth ConfigMap in the kube-system namespace.
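As an illustration of "just map other IAM roles/users", here is a rough Python sketch (the role ARN is hypothetical) that appends an admin mapping to aws-auth - the same thing you would normally do with kubectl edit or eksctl, and worth doing carefully, since a broken aws-auth can lock you out of the cluster:

import yaml
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

cm = v1.read_namespaced_config_map("aws-auth", "kube-system")
map_roles = yaml.safe_load(cm.data.get("mapRoles", "[]")) or []

# Hypothetical IAM role to be granted cluster admin via the system:masters group
map_roles.append({
    "rolearn": "arn:aws:iam::111122223333:role/eks-admin",
    "username": "eks-admin",
    "groups": ["system:masters"],
})

cm.data["mapRoles"] = yaml.safe_dump(map_roles)
v1.replace_namespaced_config_map("aws-auth", "kube-system", cm)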
Back to the question
I'm not sure why K8s mapped that IAM user to the least-privileged K8s user - it may be the default behaviour (a bug?) or due to the mapping record (for view perms) coming later in the ConfigMap, so it simply overwrote the previous mapping.
Anyway, there is no way to specify which K8s user to use with such a mapping.
Also, if you used eksctl to spin up the cluster, you may try creating a new mapping as per docs, but not sure if that will work.
Some reading reference: #1, #2

Spinnaker ECS account won't select

I created an ECS account name linked to an AWS account name and enabled ECS. Now, when creating a new server group and selecting ECS, the ECS account name appears under the account section, but it won't let me select it.
If you are using Spinnaker < 1.19.X then the AWS ECS provider depends on the AWS EC2 provider and the AWS IAM structure.
Please read AWS Providers Overview to understand the AWS IAM structure that is required (an AWS managing account and AWS managed accounts through the AssumeRole action).
Then you can set up the AWS EC2 provider following this easy-to-get-started guide by Armory.
Finally, set up the AWS ECS provider with the legacy instructions found at spinnaker.io.
If you are using Spinnaker > 1.19.X then you must use AWS ECS Service linked roles
One very important step is tagging the AWS VPC subnets so that Spinnaker can access them.
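If you want to script that tagging, a boto3 sketch is below. Note that the subnet IDs, tag key, and tag value are placeholders; the exact tags Spinnaker's AWS/ECS provider expects for your version are described in the spinnaker.io setup docs.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # adjust the region

# Placeholder subnet IDs and tag - replace with the values your Spinnaker setup requires
subnet_ids = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]
ec2.create_tags(
    Resources=subnet_ids,
    Tags=[{"Key": "example-spinnaker-tag", "Value": "example-value"}],
)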

AWS RDS Audit logging CFN

How can one turn on audit logging for RDS via CloudFormation when we set up the RDS instance?
The only way I have seen so far is to set up the instance and then modify it, checking the Audit logging box to forward logs to CloudWatch. Can we do this for MySQL when we set up the instance, without having to modify it afterwards?
This is not directly available from CloudFormation; you need to create a custom resource to enable the logs.
I have created a custom resource to enable the logs after the DB instance is created. Here are the CloudFormation template and the Boto3 script:
https://gist.github.com/sudharsans/ab950c43f2086801d19b016f73310832
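As a rough sketch of what the handler behind such a custom resource might look like (this is not the linked gist, and the log types shown assume a MySQL-family engine), here is a minimal Lambda handler that enables CloudWatch log exports on an existing instance:

import boto3
import cfnresponse  # available when the Lambda code is inlined in the CloudFormation template

rds = boto3.client("rds")

def handler(event, context):
    try:
        if event["RequestType"] in ("Create", "Update"):
            rds.modify_db_instance(
                DBInstanceIdentifier=event["ResourceProperties"]["DBInstanceIdentifier"],
                CloudwatchLogsExportConfiguration={
                    # Log types valid for MySQL-family engines
                    "EnableLogTypes": ["audit", "error", "general", "slowquery"]
                },
                ApplyImmediately=True,
            )
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        print(exc)
        cfnresponse.send(event, context, cfnresponse.FAILED, {})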