MongoDB credentials are not working with StatefulSet

I have this sts:
apiVersion: "apps/v1"
kind: "StatefulSet"
metadata:
name: "mongo-benchmark"
spec:
serviceName: mongo-benchmark-headless
replicas: 1
selector:
matchLabels:
app: "mongo-benchmark"
template:
metadata:
labels:
app: "mongo-benchmark"
spec:
containers:
- name: "mongo-benchmark"
image: "mongo:5"
imagePullPolicy: "IfNotPresent"
env:
- name: "MONGO_INITDB_ROOT_USERNAME"
value: "admin"
- name: "MONGO_INITDB_ROOT_PASSWORD"
value: "admin"
ports:
- containerPort: 27017
name: "mongo-port"
volumeMounts:
- name: "mongo-benchmark-data"
mountPath: "/data/db"
volumes:
- name: "mongo-benchmark-data"
persistentVolumeClaim:
claimName: "mongo-benchmark-pvc"
Everything is deployed.
The root user's username and password are both admin.
But when I open a terminal in the pod and run these commands, I get:
$ mongo
> use admin
> db.auth("admin", "admin")
Error: Authentication failed.
0
I can't even read/write from/to other databases.
For example:
$ mongo
> use test
> db.col.findOne({})
uncaught exception: Error: error: {
    "ok" : 0,
    "errmsg" : "not authorized on test to execute command { find: \"col\", filter: {}, limit: 1.0, singleBatch: true, lsid: { id: UUID(\"30788b3e-48f0-4ff0-aaec-f17e20c67bde\") }, $db: \"test\" }",
    "code" : 13,
    "codeName" : "Unauthorized"
}
I don't know what I'm doing wrong. Does anyone know how to authenticate?
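A hedged aside (not part of the original question): the bare mongo command opens an unauthenticated shell, so it's worth passing the credentials explicitly; also, the MONGO_INITDB_* variables are only applied by the image's init scripts when /data/db is empty, so a PVC that already contains data keeps its old users. A minimal check, assuming the pod is named mongo-benchmark-0:
$ kubectl exec -it mongo-benchmark-0 -- mongo -u admin -p admin --authenticationDatabase admin
# If this still fails, the PVC probably predates the credentials; deleting it
# (which destroys the data) lets the image re-run its initialization scripts.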

Related

create postgresql users with ansible sub nested-list

I am trying to find the best YAML structure to maintain databases and roles/users for Postgres using Ansible. One of the structures I tested is:
---
- databases:
    - name: database1
      owner: postgres
      users:
        - name: user1
          pass: secret
          priv: CONNECT,REPLICATION
        - name: user2
          pass: secret
          priv: CONNECT
    - name: database2
      owner: postgres
      users:
        - name: user3
          pass: secret
          priv: CONNECT
        - name: user2   # user previously created; implies users need to be created first
          pass: secret
          priv: CONNECT
But how could I loop and get only a list of users so that I could use them in:
- name: Create users
  postgresql_user:
    name: '{{ item.name }}'
    password: '{{ item.pass }}'
I may split the YAML and have something like:
---
- postgres_users:
    - user: user1
      pass: secret
    - user: user2
      pass: secret
- postgres_databases:
    - name: db1
      owner: <user> | default('postgres')
      users:
        - user: user1
          priv: XXX.YYY
        - user: user2
    - name: db2
      owner: <user> | default('postgres')
      users:
        - user: user1
          priv: ZZZ
        - user: user2
          priv: XXX
But I'm still wondering how to loop over postgres_databases and, from there, use only the users.
Any ideas/tips?
Given the first structure -- and assuming that there's a typo and that databases is not actually a member of a list -- you could write:
- name: create users
  postgresql_user:
    name: "{{ item.1.name }}"
    password: "{{ item.1.pass }}"
  loop: "{{ databases|subelements('users') }}"
  loop_control:
    label: "{{ item.1.name }}"
Here's a complete reproducer; I've wrapped the postgresql_user call in a debug task so that I can run it locally:
- hosts: localhost
  gather_facts: false
  vars:
    databases:
      - name: database1
        owner: postgres
        users:
          - name: user1
            pass: secret
            priv: CONNECT,REPLICATION
          - name: user2
            pass: secret
            priv: CONNECT
      - name: database2
        owner: postgres
        users:
          - name: user3
            pass: secret
            priv: CONNECT
          - name: user2
            pass: secret
            priv: CONNECT
  tasks:
    - name: create users
      debug:
        msg:
          postgresql_user:
            name: "{{ item.1.name }}"
            password: "{{ item.1.pass }}"
      loop: "{{ databases|subelements('users') }}"
      loop_control:
        label: "{{ item.1.name }}"
This outputs:
TASK [create users] *********************************************************************************
ok: [localhost] => (item=user1) => {
    "msg": {
        "postgresql_user": {
            "name": "user1",
            "password": "secret"
        }
    }
}
ok: [localhost] => (item=user2) => {
    "msg": {
        "postgresql_user": {
            "name": "user2",
            "password": "secret"
        }
    }
}
ok: [localhost] => (item=user3) => {
    "msg": {
        "postgresql_user": {
            "name": "user3",
            "password": "secret"
        }
    }
}
ok: [localhost] => (item=user2) => {
    "msg": {
        "postgresql_user": {
            "name": "user2",
            "password": "secret"
        }
    }
}
The above will attempt to create user2 twice, but that should be okay; the second attempt won't make any changes because the user already exists. If you wanted a unique list of users you could do something like this:
- name: get unique list of users
  set_fact:
    all_users: "{{ databases|json_query('[].users[]')|unique }}"

- name: create users
  debug:
    msg:
      postgresql_user:
        name: "{{ item.name }}"
        password: "{{ item.pass }}"
  loop: "{{ all_users }}"
  loop_control:
    label: "{{ item.name }}"

Sidekiq failing to connect to postgresql database

I am attempting to deploy Sidekiq as a sidecar container alongside Discourse, and I am receiving the following error:
2022-05-31T02:57:01.242Z pid=1 tid=cd1 WARN:
ActiveRecord::ConnectionNotEstablished: could not connect to server:
No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Both Sidekiq and Discourse use the same Bitnami Docker image; the only difference is that the Sidekiq container executes a run file to start Sidekiq. The PostgreSQL server I am connecting to is an existing server, and Discourse itself doesn't seem to have any issues connecting to it. I have looked at the run file for Sidekiq and I don't think it's pulling the env variables properly. I have tried various different variable notations, thinking it was a syntax issue. Below is the deployment I am using; any insight would be greatly appreciated.
containers:
  - name: discourse
    image: bitnami/discourse
    livenessProbe:
      tcpSocket:
        port: 3000
      initialDelaySeconds: 90
      periodSeconds: 90
    env:
      - name: DISCOURSE_HOST
        value: "xxx"
      - name: DISCOURSE_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: DISCOURSE_DATABASE_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_DATABASE_USER
        value: "postgres"
      - name: DISCOURSE_DATABASE_PASSWORD
        value: "xxx"
      - name: DISCOURSE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_REDIS_HOST
        value: "redis.redis"
      - name: DISCOURSE_REDIS_PORT_NUMER
        value: "6379"
      - name: POSTGRESQL_CLIENT_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER
        value: "5432"
      - name: POSTGRESQL_CLIENT_POSTGRES_USER
        value: "postgres"
      - name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
        value: "xxx"
      - name: POSTGRESQL_CLIENT_CREATE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: POSTGRESQL_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_POSTGRESQL_USERNAME
        value: "postgres"
      - name: DISCOURSE_POSTGRESQL_PASSWORD
        value: "xxx"
      - name: DISCOURSE_POSTGRESQL_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_SMTP_HOST
        value: "smtp.mailgun.com"
      - name: DISCOURSE_SMTP_PORT
        value: "587"
      - name: DISCOURSE_SMTP_USER
        value: "xxx"
      - name: DISCOURSE_SMTP_PASSWORD
        value: "xxx"
      - name: DISCOURSE_SMTP_PROTOCOL
        value: "tls"
    ports:
      - name: portone
        containerPort: 3000
      - name: porttwo
        containerPort: 5432
      - name: portthree
        containerPort: 6379
    volumeMounts:
      - mountPath: "/bitnami/discourse"
        name: discourse
  - name: sidekiq
    image: docker.io/bitnami/discourse
    command: ["/opt/bitnami/scripts/discourse-sidekiq/run.sh"]
    env:
      - name: DISCOURSE_HOST
        value: "xxx"
      - name: DISCOURSE_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: DISCOURSE_DATABASE_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_DATABASE_USER
        value: "postgres"
      - name: DISCOURSE_DATABASE_PASSWORD
        value: "xxx"
      - name: DISCOURSE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_REDIS_HOST
        value: "redis.redis"
      - name: DISCOURSE_REDIS_PORT_NUMER
        value: "6379"
      - name: DISCOURSE_SMTP_HOST
        value: "smtp.mailgun.com"
      - name: DISCOURSE_SMTP_PORT
        value: "587"
      - name: DISCOURSE_SMTP_USER
        value: "xxx"
      - name: DISCOURSE_SMTP_PASSWORD
        value: "xxx"
      - name: DISCOURSE_SMTP_PROTOCOL
        value: "tls"
      - name: POSTGRESQL_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_POSTGRESQL_USERNAME
        value: "postgres"
      - name: DISCOURSE_POSTGRESQL_PASSWORD
        value: "xxx"
      - name: DISCOURSE_POSTGRESQL_NAME
        value: "bitnami_discourse"
      - name: POSTGRESQL_CLIENT_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER
        value: "5432"
      - name: POSTGRESQL_CLIENT_POSTGRES_USER
        value: "postgres"
      - name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
        value: "xxx"
You need to run one more command, /opt/bitnami/scripts/discourse-sidekiq/setup.sh, in the sidekiq container's command before starting it.
For example:
containers:
  - name: discourse
    image: bitnami/discourse
    livenessProbe:
      tcpSocket:
        port: 3000
      initialDelaySeconds: 90
      periodSeconds: 90
    env:
      - name: DISCOURSE_HOST
        value: "xxx"
      - name: DISCOURSE_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: DISCOURSE_DATABASE_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_DATABASE_USER
        value: "postgres"
      - name: DISCOURSE_DATABASE_PASSWORD
        value: "xxx"
      - name: DISCOURSE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_REDIS_HOST
        value: "redis.redis"
      - name: DISCOURSE_REDIS_PORT_NUMER
        value: "6379"
      - name: POSTGRESQL_CLIENT_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER
        value: "5432"
      - name: POSTGRESQL_CLIENT_POSTGRES_USER
        value: "postgres"
      - name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
        value: "xxx"
      - name: POSTGRESQL_CLIENT_CREATE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: POSTGRESQL_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_POSTGRESQL_USERNAME
        value: "postgres"
      - name: DISCOURSE_POSTGRESQL_PASSWORD
        value: "xxx"
      - name: DISCOURSE_POSTGRESQL_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_SMTP_HOST
        value: "smtp.mailgun.com"
      - name: DISCOURSE_SMTP_PORT
        value: "587"
      - name: DISCOURSE_SMTP_USER
        value: "xxx"
      - name: DISCOURSE_SMTP_PASSWORD
        value: "xxx"
      - name: DISCOURSE_SMTP_PROTOCOL
        value: "tls"
    ports:
      - name: portone
        containerPort: 3000
      - name: porttwo
        containerPort: 5432
      - name: portthree
        containerPort: 6379
    volumeMounts:
      - mountPath: "/bitnami/discourse"
        name: discourse
  - name: sidekiq
    image: docker.io/bitnami/discourse
    command:
      - bash
      - -c
      - |
        /opt/bitnami/scripts/discourse-sidekiq/setup.sh
        /opt/bitnami/scripts/discourse-sidekiq/run.sh
    env:
      - name: DISCOURSE_HOST
        value: "xxx"
      - name: DISCOURSE_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: DISCOURSE_DATABASE_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_DATABASE_USER
        value: "postgres"
      - name: DISCOURSE_DATABASE_PASSWORD
        value: "xxx"
      - name: DISCOURSE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_REDIS_HOST
        value: "redis.redis"
      - name: DISCOURSE_REDIS_PORT_NUMER
        value: "6379"
      - name: DISCOURSE_SMTP_HOST
        value: "smtp.mailgun.com"
      - name: DISCOURSE_SMTP_PORT
        value: "587"
      - name: DISCOURSE_SMTP_USER
        value: "xxx"
      - name: DISCOURSE_SMTP_PASSWORD
        value: "xxx"
      - name: DISCOURSE_SMTP_PROTOCOL
        value: "tls"
      - name: POSTGRESQL_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_POSTGRESQL_USERNAME
        value: "postgres"
      - name: DISCOURSE_POSTGRESQL_PASSWORD
        value: "xxx"
      - name: DISCOURSE_POSTGRESQL_NAME
        value: "bitnami_discourse"
      - name: POSTGRESQL_CLIENT_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER
        value: "5432"
      - name: POSTGRESQL_CLIENT_POSTGRES_USER
        value: "postgres"
      - name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
        value: "xxx"

MongoDB replicaset external access - keep getting internal cluster names

I must be doing something terribly wrong. I have a replicaset configured using the MongoDB community operator, deployed in GKE, and exposed via LoadBalancers.
This replicaset has 3 members. I have defined the replicaSetHorizons like so:
replicaSetHorizons:
  - mongo-replica: document-0.mydomain.com:30000
  - mongo-replica: document-1.mydomain.com:30001
  - mongo-replica: document-2.mydomain.com:30002
I then use mongosh from an external source (local computer outside of GKE) to connect:
mongosh "mongodb://<credentials>#document-0.mydomain.com:30000,document-1.mydomain.com:30001,document-2.mydomain.com:30002/admin?ssl=false&replicaSet=document"
I do not use SSL for now because I am testing this deployment. What I found is that mongosh always returns this error:
MongoNetworkError: getaddrinfo ENOTFOUND document-0.document-svc.mongodb.svc.cluster.local
Can someone explain to me what I am doing wrong? Why is my internal cluster name being given to mongosh to attempt the connection?
If I try to connect to a single member of the replicaset, the connection will succeed. If I run rs.conf(), I see the following (which looks correct??):
{
  _id: 'document',
  version: 1,
  term: 1,
  members: [
    {
      _id: 0,
      host: 'document-0.document-svc.mongodb.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      horizons: { 'mongo-replica': 'document-0.mydomain.com:30000' },
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 1,
      host: 'document-1.document-svc.mongodb.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      horizons: { 'mongo-replica': 'document-1.mydomain.com:30001' },
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 2,
      host: 'document-2.document-svc.mongodb.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      horizons: { 'mongo-replica': 'document-2.mydomain.com:30002' },
      secondaryDelaySecs: Long("0"),
      votes: 1
    }
  ],
  protocolVersion: Long("1"),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId("62209784e8aacd8385db1609")
  }
}
The replicaSetHorizons feature does not work without SSL/TLS certificates.
Quoting from the Kubernetes Operator reference:
This method to use split horizons requires the Server Name Indication extension of the TLS protocol
In order to make this work, you need to include:
a TLS certificate
a TLS key
a CA certificate
The TLS certificate must contain the DNS names of all your replica set members in its Subject Alternative Name (SAN) section.
There is a tutorial on the operator's GitHub pages. You need to complete all the steps; certificate issuance cannot be skipped.
Certificate resource (using cert-manager.io CRD)
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cert-manager-certificate
spec:
  secretName: mongodb-tls
  issuerRef:
    name: ca-issuer
    kind: Issuer
  duration: 87600h
  commonName: "*.document-svc.mongodb.svc.cluster.local"
  dnsNames:
    - "*.document-svc.mongodb.svc.cluster.local"
    - "document-0.mydomain.com"
    - "document-1.mydomain.com"
    - "document-2.mydomain.com"
MongoDBCommunity resource excerpt
spec:
  type: ReplicaSet
  ...
  replicaSetHorizons:
    - mongo-replica: document-0.mydomain.com:30000
    - mongo-replica: document-1.mydomain.com:30001
    - mongo-replica: document-2.mydomain.com:30002
  security:
    tls:
      enabled: true
      certificateKeySecretRef:
        name: mongodb-tls
      caConfigMapRef:
        name: ca-config-map
The Secret mongodb-tls will be of type tls and contain ca.crt, tls.crt, and tls.key fields, representing the Certificate Authority certificate, the TLS certificate, and the TLS key respectively.
The ConfigMap ca-config-map will contain only the ca.crt field.
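A minimal sketch (my addition) of creating that ConfigMap, assuming the CA certificate sits in a local file named ca.crt:
$ kubectl create configmap ca-config-map --from-file=ca.crt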
More info at: mongodb-operator-secure-tls
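Once TLS is enabled, the external connection could then look something like this (a hedged example; the credentials and CA file path are placeholders):
$ mongosh "mongodb://<credentials>@document-0.mydomain.com:30000,document-1.mydomain.com:30001,document-2.mydomain.com:30002/admin?tls=true&replicaSet=document" --tlsCAFile ca.crt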

Using Pulumi and Azure, is there any API to create a SecretProviderClass without using yaml?

I'm trying to find a better way to solve this scenario than resorting to YAML inside a pulumi.apply call (which apparently has problems with preview).
The idea here is (using Azure Kubernetes) to create a secret and then make it available inside a pod (nginx pod here just for test purposes).
The current code works, but is there an API that I'm missing?
I started to mess around with:
const foobar = new k8s.storage.v1beta1.CSIDriver("testCSI", { ...
but I'm not really sure if that's the right path and, if it is, what to put where to get the same effect.
Side note: no, I do not want to put secrets into environment variables. Although convenient, they leak into the GUI, logs, and possibly more places.
const provider = new k8s.Provider("provider", {
  kubeconfig: config.kubeconfig,
  namespace: "default",
});

const secret = new keyvault.Secret("mysecret", {
  resourceGroupName: environmentResourceGroupName,
  vaultName: keyVaultName,
  secretName: "just-some-secret",
  properties: {
    value: administratorLogin,
  },
});

pulumi.all([environmentTenantId, keyVaultName, clusterManagedIdentityClientId])
  .apply(([environmentTenantId, keyVaultName, clusterManagedIdentityClientId]) => {
    let yammie = `apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname-system-msi
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "${clusterManagedIdentityClientId}"
    keyvaultName: ${keyVaultName}
    cloudName: ""
    objects: |
      array:
        - |
          objectName: just-some-secret
          objectType: secret
    tenantId: ${environmentTenantId}`;

    const yamlConfigGroup = new k8s.yaml.ConfigGroup("test-secret",
      {
        yaml: yammie,
      },
      {
        provider: provider,
        dependsOn: [secret],
      }
    );
  });
const deployment = new k8s.apps.v1.Deployment(
  name,
  {
    metadata: {
      labels: appLabels,
    },
    spec: {
      replicas: 1,
      selector: { matchLabels: appLabels },
      template: {
        metadata: {
          labels: appLabels,
        },
        spec: {
          containers: [
            {
              name: name,
              image: "nginx:latest",
              ports: [{ name: "http", containerPort: 80 }],
              volumeMounts: [
                {
                  name: "secrets-store01-inline",
                  mountPath: "/mnt/secrets-store",
                  readOnly: true,
                },
              ],
            },
          ],
          volumes: [
            {
              name: "secrets-store01-inline",
              csi: {
                driver: "secrets-store.csi.k8s.io",
                readOnly: true,
                volumeAttributes: { secretProviderClass: "azure-kvname-system-msi" },
              },
            },
          ],
        },
      },
    },
  },
  {
    provider: provider,
  }
);
SecretProviderClass is a CustomResource, which isn't typed because the fields can be anything you want.
const secret = new k8s.apiextensions.CustomResource("cert", {
  apiVersion: "secrets-store.csi.x-k8s.io/v1",
  kind: "SecretProviderClass",
  metadata: {
    namespace: "kube-system",
  },
  spec: {
    provider: "azure",
    secretObjects: [{
      data: [{
        objectName: cert.certificate.name,
        key: "tls.key",
      }, {
        objectName: cert.certificate.name,
        key: "tls.crt",
      }],
      secretName: "ingress-tls-csi",
      type: "kubernetes.io/tls",
    }],
    parameters: {
      usePodIdentity: "true",
      keyvaultName: cert.keyvault.name,
      objects: pulumi.interpolate`array:\n  - |\n    objectName: ${cert.certificate.name}\n    objectType: secret\n`,
      tenantId: current.then(config => config.tenantId),
    },
  },
}, { provider: k8sCluster.k8sProvider });
Note: the objects array might work with JSON.stringify, but I haven't yet tried that.
If you want to get strong typing for a CRD, you can use crd2pulumi.
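A hedged sketch of that workflow (the CRD file name is a placeholder): crd2pulumi reads a CRD manifest and emits typed resource classes, for example for Node.js:
$ crd2pulumi --nodejsPath ./crds secretproviderclass-crd.yaml
After that, the SecretProviderClass can be instantiated with full type checking instead of the untyped CustomResource above.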

Kafka monitoring via JMX

I'm using Prometheus JMX Exporter to monitor Kafka. I've defined the following pattern rules in the JMX config file:
- pattern: kafka.server<type=(.+), name=(.+)PerSec\w*, topic=(.+)><>Count
  name: kafka_server_$1_$2_total
  labels:
    topic: "$3"
- pattern: kafka.server<type=(.+), name=(.+)PerSec\w*><>Count
  name: kafka_server_$1_$2_total
  type: COUNTER
- pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    clientId: "$3"
    topic: "$4"
    partition: "$5"
- pattern: kafka.server<type=(.+), name=(.+), topic=(.+), partition=(.*)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    topic: "$3"
    partition: "$4"
- pattern: kafka.server<type=(.+), name=(.+), topic=(.+)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    topic: "$3"
  type: COUNTER
- pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    clientId: "$3"
    broker: "$4:$5"
- pattern: kafka.server<type=(.+), name=(.+), clientId=(.+)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    clientId: "$3"
- pattern: kafka.server<type=(.+), name=(.+)><>(Count|Value)
  name: kafka_server_$1_$2
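To illustrate how the first rule applies (my addition, inferred from the metric named below; it assumes lowercaseOutputName is enabled in the exporter config):
# MBean published by the broker:
#   kafka.server<type=BrokerTopicMetrics, name=BytesInPerSec, topic=test1><>Count
# Result after the first rule, with lowercased output names:
#   kafka_server_brokertopicmetrics_bytesin_total{topic="test1"}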
Now I'm having the following issue. When I send data to the topic in this way:
/bin/kafka-console-producer.sh --broker-list kafka-hostname:9092 --topic test1
The counter of the metric kafka_server_brokertopicmetrics_bytesin_total increases correctly.
When I try to send data by using the following code:
"use strict";
const envs = process.env;
const options = {
"metadata.broker.list": "kafka-hostname:9092",
"group.id": "kafka1",
topic: "test1",
key: "testKey"
};
const kafkesque = require("untubo")(options);
let count = 0;
const interval = setInterval(function() {
kafkesque.push({ hello: "world", count });
console.log("sent", count);
count++;
}, 500);
process.once("SIGINT", function() {
clearInterval(interval);
console.log("closing");
kafkesque.stop(() => {
console.log("closed");
});
});
In this case the metric doesn't change at all, but I can receive the message in the consumer. I think something is not configured properly in the pattern. Do you have any idea?