Create PostgreSQL users with Ansible from a nested list

I am trying to find the best YAML structure to maintain databases and roles/users for Postgres using Ansible. One of the structures I tested is:
---
- databases:
    - name: database1
      owner: postgres
      users:
        - name: user1
          pass: secret
          priv: CONNECT,REPLICATION
        - name: user2
          pass: secret
          priv: CONNECT
    - name: database2
      owner: postgres
      users:
        - name: user3
          pass: secret
          priv: CONNECT
        - name: user2   # user previously created; implies users need to be created first
          pass: secret
          priv: CONNECT
But how could I loop and get only a list of users so that I could use them in:
- name: Create users
  postgresql_user:
    name: '{{ item.name }}'
    password: '{{ item.pass }}'
I may split the YAML and have something like:
---
- postgres_users:
    - user: user1
      pass: secret
    - name: user2
      pass: secret
- postgres_databases:
    - name: db1
      owner: <user> | default('postgres')
      users:
        - user: user1
          priv: XXX.YYY
        - user: user2
    - name: db2
      owner: <user> | default('postgres')
      users:
        - user: user1
          priv: ZZZ
        - user: user2
          priv: XXX
But I am still wondering how to loop over postgres_databases and from there use only the users.
Any ideas/tips?

Given the first structure -- and assuming that there's a typo and that databases is not actually a member of a list -- you could write:
- name: create users
  postgresql_user:
    name: "{{ item.1.name }}"
    password: "{{ item.1.pass }}"
  loop: "{{ databases|subelements('users') }}"
  loop_control:
    label: "{{ item.1.name }}"
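Since item.0 in that loop is the enclosing database entry, the same subelements loop can also consume the per-database priv values from the question's structure. A hedged sketch, not part of the original answer: it assumes the priv strings are valid for the priv parameter of postgresql_user, which requires db to be set alongside it.

```yaml
- name: create users with their per-database privileges (sketch)
  postgresql_user:
    name: "{{ item.1.name }}"
    password: "{{ item.1.pass }}"
    db: "{{ item.0.name }}"
    priv: "{{ item.1.priv | default(omit) }}"
  loop: "{{ databases|subelements('users') }}"
  loop_control:
    label: "{{ item.0.name }}/{{ item.1.name }}"
```

Note that role attributes such as REPLICATION are not database privileges, so a mixed string like CONNECT,REPLICATION would need splitting between priv and role_attr_flags.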
Here's a complete reproducer; I've wrapped the postgres_user call in a debug task so that I can run it locally:
- hosts: localhost
  gather_facts: false
  vars:
    databases:
      - name: database1
        owner: postgres
        users:
          - name: user1
            pass: secret
            priv: CONNECT,REPLICATION
          - name: user2
            pass: secret
            priv: CONNECT
      - name: database2
        owner: postgres
        users:
          - name: user3
            pass: secret
            priv: CONNECT
          - name: user2
            pass: secret
            priv: CONNECT
  tasks:
    - name: create users
      debug:
        msg:
          postgresql_user:
            name: "{{ item.1.name }}"
            password: "{{ item.1.pass }}"
      loop: "{{ databases|subelements('users') }}"
      loop_control:
        label: "{{ item.1.name }}"
This outputs:
TASK [create users] *********************************************************************************
ok: [localhost] => (item=user1) => {
    "msg": {
        "postgresql_user": {
            "name": "user1",
            "password": "secret"
        }
    }
}
ok: [localhost] => (item=user2) => {
    "msg": {
        "postgresql_user": {
            "name": "user2",
            "password": "secret"
        }
    }
}
ok: [localhost] => (item=user3) => {
    "msg": {
        "postgresql_user": {
            "name": "user3",
            "password": "secret"
        }
    }
}
ok: [localhost] => (item=user2) => {
    "msg": {
        "postgresql_user": {
            "name": "user2",
            "password": "secret"
        }
    }
}
The above will attempt to create user2 twice, but that should be okay; the second attempt won't make any changes because the user already exists. If you wanted a unique list of users you could do something like this:
- name: get unique list of users
  set_fact:
    all_users: "{{ databases|json_query('[].users[]')|unique }}"

- name: create users
  debug:
    msg:
      postgresql_user:
        name: "{{ item.name }}"
        password: "{{ item.pass }}"
  loop: "{{ all_users }}"
  loop_control:
    label: "{{ item.name }}"
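As an aside, json_query needs the jmespath Python library on the controller. If you'd rather avoid that dependency, the same unique list can be built from core filters; a sketch, assuming every database entry defines a users list:

```yaml
- name: get unique list of users without jmespath
  set_fact:
    all_users: "{{ databases | map(attribute='users') | flatten | unique }}"
```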

Related

GitHub Actions environment secret can't be accessed

I want to do deployments with GitHub Actions into 2 different AKS clusters. I have set up repository-level secrets and a prod environment-level secret.
I have nested workflows:
name: Production Deployment
on:
  workflow_dispatch:
  push:
    tags:
      - prod/v.**
jobs:
  deploy-prod-environment:
    # HERE I CAN'T USE environment: prod - error from GitHub
    name: Production deployment
    uses: XXXX/.github/workflows/step_deployment.yaml@master
    with:
      environment: prod
      kubernetes_namespace: XXX
    secrets:
      REGISTRY_GITHUB_TOKEN: ${{ secrets.REGISTRY_GITHUB_TOKEN }}
      CLIENT_ID: ${{ secrets.CLIENT_ID }}
      CLIENT_SECRET: ${{ secrets.CLIENT_SECRET }}
      SUBSCRIPTION_ID: ${{ secrets.SUBSCRIPTION_ID }}
      TENANT_ID: ${{ secrets.TENANT_ID }}
      CLUSTER_NAME: ${{ secrets.CLUSTER_NAME }}
      RESOURCE_GROUP: ${{ secrets.RESOURCE_GROUP }}
      GITHUB_ACTOR: ${{ github.actor }}
      REGISTRY: ${{ secrets.REGISTRY }}
This is the step_deployment.yaml:
name: Deploy to a specific environment
on:
  workflow_call:
    inputs:
      environment:
        type: string
        required: true
      ...
    secrets:
      REGISTRY_GITHUB_TOKEN:
        required: true
      CLIENT_ID:
        required: true
      CLIENT_SECRET:
        required: true
      SUBSCRIPTION_ID:
        required: true
      TENANT_ID:
        required: true
      GITHUB_ACTOR:
        required: true
      CLUSTER_NAME:
        required: true
      RESOURCE_GROUP:
        required: true
      REGISTRY:
        required: true
jobs:
  ....
  building the docker images
  ....
  release:
    name: 🚀Release
    uses: XXX/.github/workflows/step_release.yaml@master
    needs: [docker-image-builds]
    with:
      environment: ${{ inputs.environment }}
      sha: ${{ github.sha }}
      kubernetes_namespace: ${{ inputs.kubernetes_namespace }}
    secrets:
      GITHUB_ACTOR: ${{ github.actor }}
      CLIENT_ID: ${{ secrets.CLIENT_ID }}
      CLIENT_SECRET: ${{ secrets.CLIENT_SECRET }}
      SUBSCRIPTION_ID: ${{ secrets.SUBSCRIPTION_ID }}
      TENANT_ID: ${{ secrets.TENANT_ID }}
      CLUSTER_NAME: ${{ secrets.CLUSTER_NAME }}
      RESOURCE_GROUP: ${{ secrets.RESOURCE_GROUP }}
And this is the step_release.yaml:
name: Release
on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
      sha:
        required: true
        type: string
      kubernetes_namespace:
        required: true
        type: string
    secrets:
      CLIENT_ID:
        required: true
      CLIENT_SECRET:
        required: true
      SUBSCRIPTION_ID:
        required: true
      TENANT_ID:
        required: true
      GITHUB_ACTOR:
        required: true
      CLUSTER_NAME:
        required: true
      RESOURCE_GROUP:
        required: true
jobs:
  log-in-to-azure-and-deploy:
    name: Login into Azure cluster and set the right context
    runs-on: ubuntu-latest
    environment: ${{ inputs.environment }}
    steps:
      - uses: actions/checkout@v3
      - name: Azure login
        id: login
        uses: azure/login@v1.4.3
        with:
          creds: '{"clientId":"${{ secrets.CLIENT_ID }}","clientSecret":"${{ secrets.CLIENT_SECRET }}","subscriptionId":"${{ secrets.SUBSCRIPTION_ID }}","tenantId":"${{ secrets.TENANT_ID }}"}'
      ........
In step_release.yaml I was able to specify the environment at the job level, and the protection works: it asks for a confirmation before deployment, which is perfect. But I can't get the secrets for the prod environment; GitHub always says that I have no access to them inside the steps, and in the main workflow I always get the repository-level values.
How can I access there already the prod environment secrets?
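Not a verified fix for this exact setup, but worth noting: environment secrets are only resolved in a job that declares the environment, and in this chain only the innermost job does. A commonly suggested pattern for reusable workflows is to forward the caller's secrets with secrets: inherit instead of listing them one by one, so the job that actually declares environment: prod can resolve them itself:

```yaml
jobs:
  deploy-prod-environment:
    name: Production deployment
    uses: XXXX/.github/workflows/step_deployment.yaml@master
    with:
      environment: prod
      kubernetes_namespace: XXX
    secrets: inherit
```

The intermediate workflow would then also pass secrets: inherit down to step_release.yaml instead of an explicit secrets map.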

MongoDB credentials are not working with StatefulSet

I have this sts:
apiVersion: "apps/v1"
kind: "StatefulSet"
metadata:
  name: "mongo-benchmark"
spec:
  serviceName: mongo-benchmark-headless
  replicas: 1
  selector:
    matchLabels:
      app: "mongo-benchmark"
  template:
    metadata:
      labels:
        app: "mongo-benchmark"
    spec:
      containers:
        - name: "mongo-benchmark"
          image: "mongo:5"
          imagePullPolicy: "IfNotPresent"
          env:
            - name: "MONGO_INITDB_ROOT_USERNAME"
              value: "admin"
            - name: "MONGO_INITDB_ROOT_PASSWORD"
              value: "admin"
          ports:
            - containerPort: 27017
              name: "mongo-port"
          volumeMounts:
            - name: "mongo-benchmark-data"
              mountPath: "/data/db"
      volumes:
        - name: "mongo-benchmark-data"
          persistentVolumeClaim:
            claimName: "mongo-benchmark-pvc"
Everything is deployed. The root user's username and password are both admin.
But when I go to the pod terminal and execute these commands, I get:
$ mongo
$ use admin
$ db.auth("admin", "admin")
Error: Authentication failed.
0
I can't even read/write from/to other databases.
For example:
$ mongo
$ use test
$ db.col.findOne({})
uncaught exception: Error: error: {
    "ok" : 0,
    "errmsg" : "not authorized on test to execute command { find: \"col\", filter: {}, limit: 1.0, singleBatch: true, lsid: { id: UUID(\"30788b3e-48f0-4ff0-aaec-f17e20c67bde\") }, $db: \"test\" }",
    "code" : 13,
    "codeName" : "Unauthorized"
}
I don't know what I'm doing wrong. Does anyone know how to authenticate?
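One possible cause (an assumption, since the question doesn't say whether the PVC is fresh): the mongo image's entrypoint only applies MONGO_INITDB_ROOT_USERNAME/MONGO_INITDB_ROOT_PASSWORD when /data/db is empty, so the root user is never created if mongo-benchmark-pvc still holds data from an earlier run without auth. A quick way to test that theory is to swap the PVC for a throwaway emptyDir:

```yaml
# sketch: a fresh, empty data directory lets the entrypoint run the
# MONGO_INITDB_* initialization (data is lost when the pod goes away)
volumes:
  - name: "mongo-benchmark-data"
    emptyDir: {}
```

If authentication works with the empty volume, the fix for the real setup is to recreate the PVC or create the user manually in the existing data.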

Sidekiq failing to connect to postgresql database

I am attempting to deploy Sidekiq as a sidecar container alongside Discourse, and I am receiving the following error:
2022-05-31T02:57:01.242Z pid=1 tid=cd1 WARN:
ActiveRecord::ConnectionNotEstablished: could not connect to server:
No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Both Sidekiq and Discourse use the same Bitnami docker image; the only difference is that the Sidekiq container runs a run file to start Sidekiq. The PostgreSQL server I am connecting to is an existing server, and Discourse itself doesn't seem to have any issues connecting to it. I have looked at the run file for Sidekiq and I don't think it's pulling the env variables properly. I have tried various different variable notations, thinking it was a syntax issue. Below is the deployment I am using; any insight would be greatly appreciated.
containers:
  - name: discourse
    image: bitnami/discourse
    livenessProbe:
      tcpSocket:
        port: 3000
      initialDelaySeconds: 90
      periodSeconds: 90
    env:
      - name: DISCOURSE_HOST
        value: "xxx"
      - name: DISCOURSE_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: DISCOURSE_DATABASE_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_DATABASE_USER
        value: "postgres"
      - name: DISCOURSE_DATABASE_PASSWORD
        value: "xxx"
      - name: DISCOURSE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_REDIS_HOST
        value: "redis.redis"
      - name: DISCOURSE_REDIS_PORT_NUMER
        value: "6379"
      - name: POSTGRESQL_CLIENT_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER
        value: "5432"
      - name: POSTGRESQL_CLIENT_POSTGRES_USER
        value: "postgres"
      - name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
        value: "xxx"
      - name: POSTGRESQL_CLIENT_CREATE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: POSTGRESQL_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_POSTGRESQL_USERNAME
        value: "postgres"
      - name: DISCOURSE_POSTGRESQL_PASSWORD
        value: "xxx"
      - name: DISCOURSE_POSTGRESQL_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_SMTP_HOST
        value: "smtp.mailgun.com"
      - name: DISCOURSE_SMTP_PORT
        value: "587"
      - name: DISCOURSE_SMTP_USER
        value: "xxx"
      - name: DISCOURSE_SMTP_PASSWORD
        value: "xxx"
      - name: DISCOURSE_SMTP_PROTOCOL
        value: "tls"
    ports:
      - name: portone
        containerPort: 3000
      - name: porttwo
        containerPort: 5432
      - name: portthree
        containerPort: 6379
    volumeMounts:
      - mountPath: "/bitnami/discourse"
        name: discourse
  - name: sidekiq
    image: docker.io/bitnami/discourse
    command: ["/opt/bitnami/scripts/discourse-sidekiq/run.sh"]
    env:
      - name: DISCOURSE_HOST
        value: "xxx"
      - name: DISCOURSE_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: DISCOURSE_DATABASE_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_DATABASE_USER
        value: "postgres"
      - name: DISCOURSE_DATABASE_PASSWORD
        value: "xxx"
      - name: DISCOURSE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_REDIS_HOST
        value: "redis.redis"
      - name: DISCOURSE_REDIS_PORT_NUMER
        value: "6379"
      - name: DISCOURSE_SMTP_HOST
        value: "smtp.mailgun.com"
      - name: DISCOURSE_SMTP_PORT
        value: "587"
      - name: DISCOURSE_SMTP_USER
        value: "xxx"
      - name: DISCOURSE_SMTP_PASSWORD
        value: "xxx"
      - name: DISCOURSE_SMTP_PROTOCOL
        value: "tls"
      - name: POSTGRESQL_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_POSTGRESQL_USERNAME
        value: "postgres"
      - name: DISCOURSE_POSTGRESQL_PASSWORD
        value: "xxx"
      - name: DISCOURSE_POSTGRESQL_NAME
        value: "bitnami_discourse"
      - name: POSTGRESQL_CLIENT_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER
        value: "5432"
      - name: POSTGRESQL_CLIENT_POSTGRES_USER
        value: "postgres"
      - name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
        value: "xxx"
You need to add one more command, ./opt/bitnami/scripts/discourse-sidekiq/setup.sh, to the sidekiq container's command, e.g.:
containers:
  - name: discourse
    image: bitnami/discourse
    livenessProbe:
      tcpSocket:
        port: 3000
      initialDelaySeconds: 90
      periodSeconds: 90
    env:
      - name: DISCOURSE_HOST
        value: "xxx"
      - name: DISCOURSE_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: DISCOURSE_DATABASE_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_DATABASE_USER
        value: "postgres"
      - name: DISCOURSE_DATABASE_PASSWORD
        value: "xxx"
      - name: DISCOURSE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_REDIS_HOST
        value: "redis.redis"
      - name: DISCOURSE_REDIS_PORT_NUMER
        value: "6379"
      - name: POSTGRESQL_CLIENT_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER
        value: "5432"
      - name: POSTGRESQL_CLIENT_POSTGRES_USER
        value: "postgres"
      - name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
        value: "xxx"
      - name: POSTGRESQL_CLIENT_CREATE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: POSTGRESQL_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_POSTGRESQL_USERNAME
        value: "postgres"
      - name: DISCOURSE_POSTGRESQL_PASSWORD
        value: "xxx"
      - name: DISCOURSE_POSTGRESQL_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_SMTP_HOST
        value: "smtp.mailgun.com"
      - name: DISCOURSE_SMTP_PORT
        value: "587"
      - name: DISCOURSE_SMTP_USER
        value: "xxx"
      - name: DISCOURSE_SMTP_PASSWORD
        value: "xxx"
      - name: DISCOURSE_SMTP_PROTOCOL
        value: "tls"
    ports:
      - name: portone
        containerPort: 3000
      - name: porttwo
        containerPort: 5432
      - name: portthree
        containerPort: 6379
    volumeMounts:
      - mountPath: "/bitnami/discourse"
        name: discourse
  - name: sidekiq
    image: docker.io/bitnami/discourse
    command:
      - bash
      - -c
      - |
        ./opt/bitnami/scripts/discourse-sidekiq/setup.sh
        ./opt/bitnami/scripts/discourse-sidekiq/run.sh
    env:
      - name: DISCOURSE_HOST
        value: "xxx"
      - name: DISCOURSE_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: DISCOURSE_DATABASE_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_DATABASE_USER
        value: "postgres"
      - name: DISCOURSE_DATABASE_PASSWORD
        value: "xxx"
      - name: DISCOURSE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_REDIS_HOST
        value: "redis.redis"
      - name: DISCOURSE_REDIS_PORT_NUMER
        value: "6379"
      - name: DISCOURSE_SMTP_HOST
        value: "smtp.mailgun.com"
      - name: DISCOURSE_SMTP_PORT
        value: "587"
      - name: DISCOURSE_SMTP_USER
        value: "xxx"
      - name: DISCOURSE_SMTP_PASSWORD
        value: "xxx"
      - name: DISCOURSE_SMTP_PROTOCOL
        value: "tls"
      - name: POSTGRESQL_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_POSTGRESQL_USERNAME
        value: "postgres"
      - name: DISCOURSE_POSTGRESQL_PASSWORD
        value: "xxx"
      - name: DISCOURSE_POSTGRESQL_NAME
        value: "bitnami_discourse"
      - name: POSTGRESQL_CLIENT_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER
        value: "5432"
      - name: POSTGRESQL_CLIENT_POSTGRES_USER
        value: "postgres"
      - name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
        value: "xxx"

Using Pulumi and Azure, is there any API to create a SecretProviderClass without using yaml?

I'm trying to find a better way to solve this scenario than resorting to YAML inside a pulumi.apply call (which apparently has problems with preview).
The idea here is (using Azure Kubernetes) to create a secret and then make it available inside a pod (an nginx pod here, just for test purposes).
The current code works, but is there an API that I'm missing?
Started to mess around with:
const foobar = new k8s.storage.v1beta1.CSIDriver("testCSI", { ...
but I'm not really sure whether it is the right path and, if it is, what to put where to get the same effect.
Sidenote: no, I do not want to put secrets into environment variables. Although convenient, they leak into the GUI and logs and possibly more places.
const provider = new k8s.Provider("provider", {
  kubeconfig: config.kubeconfig,
  namespace: "default",
});

const secret = new keyvault.Secret("mysecret", {
  resourceGroupName: environmentResourceGroupName,
  vaultName: keyVaultName,
  secretName: "just-some-secret",
  properties: {
    value: administratorLogin,
  },
});

pulumi.all([environmentTenantId, keyVaultName, clusterManagedIdentityClientId])
  .apply(([environmentTenantId, keyVaultName, clusterManagedIdentityClientId]) => {
    let yammie = `apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname-system-msi
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "${clusterManagedIdentityClientId}"
    keyvaultName: ${keyVaultName}
    cloudName: ""
    objects: |
      array:
        - |
          objectName: just-some-secret
          objectType: secret
    tenantId: ${environmentTenantId}`;
    const yamlConfigGroup = new k8s.yaml.ConfigGroup("test-secret",
      {
        yaml: yammie,
      },
      {
        provider: provider,
        dependsOn: [secret],
      }
    );
  });
const deployment = new k8s.apps.v1.Deployment(
  name,
  {
    metadata: {
      labels: appLabels,
    },
    spec: {
      replicas: 1,
      selector: { matchLabels: appLabels },
      template: {
        metadata: {
          labels: appLabels,
        },
        spec: {
          containers: [
            {
              name: name,
              image: "nginx:latest",
              ports: [{ name: "http", containerPort: 80 }],
              volumeMounts: [
                {
                  name: "secrets-store01-inline",
                  mountPath: "/mnt/secrets-store",
                  readOnly: true,
                },
              ],
            },
          ],
          volumes: [
            {
              name: "secrets-store01-inline",
              csi: {
                driver: "secrets-store.csi.k8s.io",
                readOnly: true,
                volumeAttributes: { secretProviderClass: "azure-kvname-system-msi" },
              },
            },
          ],
        },
      },
    },
  },
  {
    provider: provider,
  }
);
SecretProviderClass is a CustomResource, which isn't typed because the fields can be anything you want:
const secret = new k8s.apiextensions.CustomResource("cert", {
  apiVersion: "secrets-store.csi.x-k8s.io/v1",
  kind: "SecretProviderClass",
  metadata: {
    namespace: "kube-system",
  },
  spec: {
    provider: "azure",
    secretObjects: [{
      data: [{
        objectName: cert.certificate.name,
        key: "tls.key",
      }, {
        objectName: cert.certificate.name,
        key: "tls.crt",
      }],
      secretName: "ingress-tls-csi",
      type: "kubernetes.io/tls",
    }],
    parameters: {
      usePodIdentity: "true",
      keyvaultName: cert.keyvault.name,
      objects: pulumi.interpolate`array:\n  - |\n    objectName: ${cert.certificate.name}\n    objectType: secret\n`,
      tenantId: current.then(config => config.tenantId),
    }
  }
}, { provider: k8sCluster.k8sProvider })
Note: the objects array might work with JSON.stringify, but I haven't yet tried that.
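For readability, the escaped string passed to objects above is meant to render to YAML like the following (objectName shown with a hypothetical value in place of the interpolated certificate name):

```yaml
array:
  - |
    objectName: my-cert
    objectType: secret
```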
If you want to get strong typing for a CRD, you can use crd2pulumi.

How to get kubernetes node name and IP address as dictionary in ansible?

I need to get the node name and IP address of each node and then create a dictionary object. I am able to get the Kubernetes node list using the below playbook:
- hosts: k8s
  tasks:
    - name: get cluster nodes
      shell: "kubectl get nodes -o wide --no-headers | awk '{ print $1 ,$7}'"
      register: nodes

    - debug: var=nodes

    - set_fact:
        node_data: {}

    - name: display node name
      debug:
        msg: "name is {{ item.split(' ').0 }}"
      with_items: "{{ nodes.stdout_lines }}"

    - set_fact:
        node_data: "{{ node_data | combine ( item.split(' ').0 : { 'name': item.split(' ').0 , 'ip' : item.split(' ').1 }, recursive=True) }}"
      with_items: "{{ nodes.stdout_lines }}"

    - debug: var=node_data
I got the below error:
FAILED! => {"msg": "template error while templating string: expected token ',', got ':'. String: {{ node_data | combine ( item.split(' ').0 : { 'name':item.split(' ').0 , 'ip': item.split(' ').1 }, recursive=True) }}"}
The output of the kubectl command
kubectl get nodes -o wide --no-headers | awk '{ print $1 ,$7}'
is as follows:
ip-192-168-17-93.ec2.internal 55.175.171.80
ip-192-168-29-91.ec2.internal 3.23.224.95
ip-192-168-83-37.ec2.internal 54.196.19.195
ip-192-168-62-241.ec2.internal 107.23.129.142
How to get the nodename and ip address into dictionary object in ansible?
The first argument to the combine filter must be a dictionary. You're calling:
- set_fact:
    node_data: "{{ node_data | combine ( item.split(' ').0 : { 'name': item.split(' ').0 , 'ip' : item.split(' ').1 }, recursive=True) }}"
  with_items: "{{ nodes.stdout_lines }}"
You need to make that:
- set_fact:
    node_data: "{{ node_data | combine ({item.split(' ').0 : { 'name': item.split(' ').0 , 'ip' : item.split(' ').1 }}, recursive=True) }}"
  with_items: "{{ nodes.stdout_lines }}"
Note the new {...} around your first argument to combine. You might want to consider reformatting this task for clarity, which might make this sort of issue more obvious:
- set_fact:
    node_data: >-
      {{ node_data | combine({
        item.split(' ').0: {
          'name': item.split(' ').0,
          'ip': item.split(' ').1
        },
      }, recursive=True) }}
  with_items: "{{ nodes.stdout_lines }}"
You could even make it a little more clear by moving the calls to item.split into a vars section, like this:
- set_fact:
    node_data: >-
      {{ node_data | combine({
        name: {
          'name': name,
          'ip': ip
        },
      }, recursive=True) }}
  vars:
    name: "{{ item.split(' ').0 }}"
    ip: "{{ item.split(' ').1 }}"
  with_items: "{{ nodes.stdout_lines }}"