How to create multiple users in Bitnami PostgreSQL - postgresql

I'm just wondering if there is any possibility to create multiple custom users with the Bitnami PostgreSQL Helm chart.
Can auth.username in values.yaml be used to create multiple custom users? How would passwords be assigned to the users in that case?

I have not tried it myself, but the Bitnami PostgreSQL Helm chart has a section that allows you to run an initdb script. I believe you can use it to define additional users.
See here: https://github.com/bitnami/charts/tree/main/bitnami/postgresql#initialize-a-fresh-instance
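For example, something along these lines in values.yaml (untested; the value name initdbScripts may differ between chart versions, and the user names, passwords, and database name are placeholders):

initdbScripts:
  create_users.sql: |
    -- Runs once, when the database is initialized for the first time.
    CREATE USER app_one WITH PASSWORD 'changeme1';
    CREATE USER app_two WITH PASSWORD 'changeme2';
    GRANT ALL PRIVILEGES ON DATABASE my_db TO app_one;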
Let us know if it worked :-)

Related

Need to create a K8s env var with driver name and credentials

I have to deploy a Docker image that uses a PostgreSQL DB. The connection string is like the one below; what is the best method I can use?
"postgresql://username#host.name.svc.cluster.local?sslmode=require"
I have used an env var like the one below, but it is not working:
- name: DB_ADDRESS
  value: "postgresql://username#tcp(host.name.svc.cluster.local)?sslmode=require"
In the past I had to create a PostgreSQL DB, and I would suggest using a Helm chart for it.
It gives you a lot of flexibility in configuration.
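On the connection string itself: a PostgreSQL URI uses user@host, while the user#tcp(host) form looks like a Go MySQL DSN, which PostgreSQL drivers will not parse. A minimal sketch of a corrected env var, assuming the default port 5432 and a database named mydb (both placeholders):

env:
  - name: DB_ADDRESS
    value: "postgresql://username@host.name.svc.cluster.local:5432/mydb?sslmode=require"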

Can a Grafana cluster use CockroachDB as its metadata DB?

I see that a Grafana cluster can use Postgres or MySQL as its metadata DB.
Can it also use CockroachDB?
(In general, I'm looking for an HA solution for Grafana where the DB is also HA.)
Thanks,
Moshe
You might be interested in following along with this issue: https://github.com/grafana/grafana/issues/8900
There are a couple of problems that prevent it from working out of the box right now. A big one is that CockroachDB only has experimental support for altering the data types of columns, which Grafana's schema migrations rely on.
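For illustration, this is the general shape of migration that trips CockroachDB (the table and column names here are hypothetical, not Grafana's actual schema); as of recent CockroachDB versions it is only accepted behind an experimental session setting:

-- Must be enabled first, per CockroachDB's docs:
SET enable_experimental_alter_column_type_general = true;
-- The class of statement Grafana's migrations issue:
ALTER TABLE dashboard ALTER COLUMN data TYPE TEXT;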

Why do you need to reference the IAM role in the COPY command on Redshift?

Regarding using the COPY command to populate a Redshift table with data from S3: I'm wondering why you have to specify the role that provides the permissions via its ARN, even though the Redshift cluster is already associated with that role. This seems redundant to me, but there is probably a reason for it. Hence my question.
This question arose upon reading the Redshift getting started guide, specifically steps 2, 3 and 6.
It's not mandatory to reference an IAM role when using the COPY command; it is one of several authorization methods available for the cluster to access external resources (e.g. files stored on S3). Specifying the IAM_ROLE clause tells Redshift that this is the authorization method to use; you could alternatively specify ACCESS_KEY_ID/SECRET_ACCESS_KEY or CREDENTIALS.
https://docs.aws.amazon.com/redshift/latest/dg/copy-usage_notes-access-permissions.html
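For example (the table name, bucket, and ARN below are placeholders), the two styles look like this:

-- Role-based authorization: Redshift assumes the named role.
COPY my_table
FROM 's3://my-bucket/data/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS CSV;

-- Key-based authorization instead, with no role ARN:
COPY my_table
FROM 's3://my-bucket/data/'
ACCESS_KEY_ID '<your-access-key-id>'
SECRET_ACCESS_KEY '<your-secret-access-key>'
FORMAT AS CSV;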
The reason you need to add the ARN for a specific IAM role is that it's possible to add more than one role to a cluster.
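As a side note: if a default IAM role has been set on the cluster, newer Redshift releases also accept the keyword default in place of a full ARN, which removes the redundancy the question describes (table and bucket are placeholders):

COPY my_table
FROM 's3://my-bucket/data/'
IAM_ROLE default
FORMAT AS CSV;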

Kubernetes: Databases & DB Users

We are planning to use Kube for Postgres deployments. Our applications will be microservices, each with a separate schema (or logical database). For security's sake, we'd like to have a separate user for each schema/logical DB.
I suppose the DB/schema and user should be created by Kube, so the application itself does not need access to the DB admin account.
In Stolon it seems there is only the possibility to create a single user and a single database, and this seems to be the case for other HA Postgres charts as well.
Question: What is the preferred way in Microservices in Kube to create DB users?
When it comes to creating users, as you said, most charts and containers have environment variables for creating a user at boot time. However, most of them do not support creating multiple users at boot time.
What other containers do is, as you said, keep the root credentials in K8s Secrets so they can access the database and create the proper schemas and users. This does not necessarily need to be done in the application logic; for example, an init container can set up the proper database for your application to run.
https://kubernetes.io/docs/concepts/workloads/pods/init-containers
This way you would have a pod with two containers: one for your application and an init container for setting up the DB.
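A minimal sketch of such a Pod (the host, images, user names, and SQL below are placeholders; the admin password is pulled from a Secret rather than hard-coded):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  initContainers:
    - name: init-db
      image: postgres:15   # any image that ships psql will do
      command: ["psql"]
      args:
        - "--host=postgres.default.svc.cluster.local"
        - "--username=postgres"
        - "--command=CREATE USER app_user WITH PASSWORD 'app_pass'; CREATE SCHEMA IF NOT EXISTS app AUTHORIZATION app_user;"
      env:
        - name: PGPASSWORD   # admin password read from a Secret
          valueFrom:
            secretKeyRef:
              name: postgres-admin
              key: password
  containers:
    - name: app
      image: my-app:latest

Note the script is not idempotent as written: on a Pod restart, CREATE USER will fail if the user already exists, so a real init script should guard against that.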

Convert Terraform Templates to CloudFormation Templates

I want to convert existing Terraform templates (HCL) to AWS CloudFormation templates (JSON/YAML).
I basically want to find security issues in these templates with cfn_nag.
An approach I have already tried was converting the HCL to JSON and then passing the result to cfn_nag, but it failed since the two formats have different structures.
Can anyone please provide any suggestions here?
A rather convoluted way of achieving this is to use Terraform to stand up actual AWS environments, and then to use AWS's CloudFormer to extract CloudFormation templates (JSON or YAML) from what Terraform has built, at which point you can use cfn-nag.
CloudFormer has some limitations, in that not all AWS resources are currently supported (RDS security groups, for example), but it will get you all the basic AWS resources.
Don't forget to remove all the environments, including CloudFormer's, to minimise the cost.
You want to use static code analysis to find security issues in your Terraform setup.
Converting Terraform to CloudFormation and then using cfn-nag is one way. However, there are now tools that operate directly on the Terraform setup.
I would recommend taking a look at terrascan. It is built on terraform_validate.
https://github.com/bridgecrewio/checkov/ runs security scanning for both Terraform and CloudFormation.
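For example, checkov can be pointed straight at the HCL directory, so no CloudFormation conversion is needed (the directory path is a placeholder):

# Install checkov and scan the Terraform directory directly.
pip install checkov
checkov -d ./terraform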