MongoDB replicaset external access - keep getting internal cluster names - mongodb

I must be doing something terribly wrong. I have a replicaset configured using the MongoDB community operator, deployed in GKE, and exposed via LoadBalancers.
This replicaset has 3 members. I have defined the replicaSetHorizons like so:
replicaSetHorizons:
- mongo-replica: document-0.mydomain.com:30000
- mongo-replica: document-1.mydomain.com:30001
- mongo-replica: document-2.mydomain.com:30002
I then use mongosh from an external source (local computer outside of GKE) to connect:
mongosh "mongodb://<credentials>#document-0.mydomain.com:30000,document-1.mydomain.com:30001,document-2.mydomain.com:30002/admin?ssl=false&replicaSet=document"
I am not using SSL for now because I am testing this deployment. What I found is that mongosh always returns this error:
MongoNetworkError: getaddrinfo ENOTFOUND document-0.document-svc.mongodb.svc.cluster.local
Can someone explain to me what I am doing wrong? Why is the internal cluster name being given to mongosh to attempt the connection?
If I try to connect to a single member of the replica set, the connection succeeds. If I run rs.conf(), I see the following (which looks correct??):
{
  _id: 'document',
  version: 1,
  term: 1,
  members: [
    {
      _id: 0,
      host: 'document-0.document-svc.mongodb.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      horizons: { 'mongo-replica': 'document-0.mydomain.com:30000' },
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 1,
      host: 'document-1.document-svc.mongodb.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      horizons: { 'mongo-replica': 'document-1.mydomain.com:30001' },
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 2,
      host: 'document-2.document-svc.mongodb.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      horizons: { 'mongo-replica': 'document-2.mydomain.com:30002' },
      secondaryDelaySecs: Long("0"),
      votes: 1
    }
  ],
  protocolVersion: Long("1"),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId("62209784e8aacd8385db1609")
  }
}

The replicaSetHorizons feature does not work without SSL/TLS certificates. Split horizons are selected based on the Server Name Indication (SNI) name the client presents during the TLS handshake; when you connect without TLS, the server has no horizon to match, so it advertises the internal cluster hostnames to the driver.
Quoting from the Kubernetes Operator reference:
This method to use split horizons requires the Server Name Indication extension of the TLS protocol
In order to make this work, you need to include:
a TLS certificate
a TLS key
a CA certificate
The TLS certificate must contain the DNS names of all your replica set members in its Subject Alternative Name (SAN) section.
There is a tutorial on the operator's GitHub pages. You need to complete all the steps; certificate issuance cannot be skipped.
Certificate resource (using cert-manager.io CRD)
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cert-manager-certificate
spec:
  secretName: mongodb-tls
  issuerRef:
    name: ca-issuer
    kind: Issuer
  duration: 87600h
  commonName: "*.document-svc.mongodb.svc.cluster.local"
  dnsNames:
    - "*.document-svc.mongodb.svc.cluster.local"
    - "document-0.mydomain.com"
    - "document-1.mydomain.com"
    - "document-2.mydomain.com"
MongoDBCommunity resource excerpt
spec:
  type: ReplicaSet
  ...
  replicaSetHorizons:
    - mongo-replica: document-0.mydomain.com:30000
    - mongo-replica: document-1.mydomain.com:30001
    - mongo-replica: document-2.mydomain.com:30002
  security:
    tls:
      enabled: true
      certificateKeySecretRef:
        name: mongodb-tls
      caConfigMapRef:
        name: ca-config-map
The Secret mongodb-tls will be of type kubernetes.io/tls and contain ca.crt, tls.crt and tls.key fields, representing the Certificate Authority certificate, the TLS certificate, and the TLS key respectively.
The ConfigMap ca-config-map will contain only the ca.crt field.
More info at: mongodb-operator-secure-tls
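Once TLS is enabled, the external connection string needs the TLS options as well. A sketch, assuming the CA certificate has been exported locally as ca.crt and <credentials> is a placeholder:
mongosh "mongodb://<credentials>@document-0.mydomain.com:30000,document-1.mydomain.com:30001,document-2.mydomain.com:30002/admin?tls=true&replicaSet=document" --tlsCAFile ca.crt
With TLS in place, mongosh presents the external hostname via SNI, so the server returns the matching horizon names instead of the internal ones.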

Related

MongoDB credentials are not working with StatefulSet

I have this sts:
apiVersion: "apps/v1"
kind: "StatefulSet"
metadata:
name: "mongo-benchmark"
spec:
serviceName: mongo-benchmark-headless
replicas: 1
selector:
matchLabels:
app: "mongo-benchmark"
template:
metadata:
labels:
app: "mongo-benchmark"
spec:
containers:
- name: "mongo-benchmark"
image: "mongo:5"
imagePullPolicy: "IfNotPresent"
env:
- name: "MONGO_INITDB_ROOT_USERNAME"
value: "admin"
- name: "MONGO_INITDB_ROOT_PASSWORD"
value: "admin"
ports:
- containerPort: 27017
name: "mongo-port"
volumeMounts:
- name: "mongo-benchmark-data"
mountPath: "/data/db"
volumes:
- name: "mongo-benchmark-data"
persistentVolumeClaim:
claimName: "mongo-benchmark-pvc"
Everything is deployed.
The root user's username and password are both admin.
But when I go to the pod's terminal and execute these commands, I get:
$ mongo
$ use admin
$ db.auth("admin", "admin")
Error: Authentication failed.
0
I can't even read/write from/to other databases.
For example:
$ mongo
$ use test
$ db.col.findOne({})
uncaught exception: Error: error: {
"ok" : 0,
"errmsg" : "not authorized on test to execute command { find: \"col\", filter: {}, limit: 1.0, singleBatch: true, lsid: { id: UUID(\"30788b3e-48f0-4ff0-aaec-f17e20c67bde\") }, $db: \"test\" }",
"code" : 13,
"codeName" : "Unauthorized"
}
I don't know what I'm doing wrong. Does anyone know how to authenticate?

Get IP address from Azure Private Endpoint using Pulumi TypeScript API

I have a Private Endpoint created in my Azure subscription. If I look into the Azure Portal I can see that the private IP assigned to my Private Endpoint NIC is 10.0.0.4.
But how can I get the IP address value using the Pulumi TypeScript API, so I can use it in my scripts?
const privateEndpoint = new network.PrivateEndpoint("privateEndpoint", {
    privateLinkServiceConnections: [{
        groupIds: ["sites"],
        name: "privateEndpointLink1",
        privateLinkServiceId: backendApp.id,
    }],
    resourceGroupName: resourceGroup.name,
    subnet: {
        id: subnet.id,
    }
});
export let ipc = privateEndpoint.networkInterfaces.apply(networkInterfaces => networkInterfaces[0].ipConfigurations)
console.log(ipc)
This is the current output for that ipc variable:
OutputImpl {
__pulumiOutput: true,
resources: [Function (anonymous)],
allResources: [Function (anonymous)],
isKnown: Promise { <pending> },
isSecret: Promise { <pending> },
promise: [Function (anonymous)],
toString: [Function (anonymous)],
toJSON: [Function (anonymous)]
}
You can't log a Pulumi Output until it's resolved, so if you change your code slightly, this will work:
const privateEndpoint = new network.PrivateEndpoint("privateEndpoint", {
    privateLinkServiceConnections: [{
        groupIds: ["sites"],
        name: "privateEndpointLink1",
        privateLinkServiceId: backendApp.id,
    }],
    resourceGroupName: resourceGroup.name,
    subnet: {
        id: subnet.id,
    }
});
export let ipc = privateEndpoint.networkInterfaces.apply(networkInterfaces => console.log(networkInterfaces[0].ipConfigurations))
I found the solution to my problem.
I was publishing a Static Web App to an incorrect Private DNS Zone. The correct one should be privatelink.1.azurestaticapps.net.
Once that was fixed, I got the data I needed:
{
  name: 'config1',
  privateDnsZoneId: '/subscriptions/<subscription>/resourceGroups/rg-static-webappc0811aae/providers/Microsoft.Network/privateDnsZones/privatelink.1.azurestaticapps.net',
  recordSets: [
    {
      fqdn: 'thankful-sand-084c7860f.privatelink.1.azurestaticapps.net',
      ipAddresses: [Array],
      provisioningState: 'Succeeded',
      recordSetName: 'thankful-sand-084c7860f',
      recordType: 'A',
      ttl: 10
    }
  ]
}
{
  fqdn: 'thankful-sand-084c7860f.privatelink.1.azurestaticapps.net',
  ipAddresses: [ '10.0.0.4' ],
  provisioningState: 'Succeeded',
  recordSetName: 'thankful-sand-084c7860f',
  recordType: 'A',
  ttl: 10
}
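If you only need the address itself, you can drill into that resolved structure with apply and export just the string. A minimal sketch, assuming the record sets come from an azure-native PrivateDnsZoneGroup resource named privateDnsZoneGroup (the resource name and property path follow the output above, but treat them as assumptions):
// Export only the first A-record IP from the resolved private DNS zone config
export const privateIp = privateDnsZoneGroup.privateDnsZoneConfigs.apply(
    configs => configs?.[0]?.recordSets?.[0]?.ipAddresses?.[0]
);
The exported value is an Output<string | undefined>, so it resolves after deployment and can be read with pulumi stack output privateIp.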

Using Pulumi and Azure, is there any API to create a SecretProviderClass without using yaml?

I'm trying to find a better way to solve this scenario than resorting to YAML inside a pulumi.apply call (which apparently has problems with preview).
The idea here is (using Azure Kubernetes) to create a secret and then make it available inside a pod (an nginx pod here just for test purposes).
The current code works, but is there an API that I'm missing?
I started to mess around with:
const foobar = new k8s.storage.v1beta1.CSIDriver("testCSI", { ...
but I'm not really sure if it is the right path and, if it is, what to put where to get the same effect.
Side note: no, I do not want to put secrets into environment variables. Although convenient, they leak into the GUI, logs, and possibly more places.
const provider = new k8s.Provider("provider", {
    kubeconfig: config.kubeconfig,
    namespace: "default",
});
const secret = new keyvault.Secret("mysecret", {
    resourceGroupName: environmentResourceGroupName,
    vaultName: keyVaultName,
    secretName: "just-some-secret",
    properties: {
        value: administratorLogin,
    },
});
pulumi.all([environmentTenantId, keyVaultName, clusterManagedIdentityClientId])
    .apply(([environmentTenantId, keyVaultName, clusterManagedIdentityClientId]) => {
        let yammie = `apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname-system-msi
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "${clusterManagedIdentityClientId}"
    keyvaultName: ${keyVaultName}
    cloudName: ""
    objects: |
      array:
        - |
          objectName: just-some-secret
          objectType: secret
    tenantId: ${environmentTenantId}`;
        const yamlConfigGroup = new k8s.yaml.ConfigGroup("test-secret",
            {
                yaml: yammie,
            },
            {
                provider: provider,
                dependsOn: [secret],
            }
        );
    });
const deployment = new k8s.apps.v1.Deployment(
    name,
    {
        metadata: {
            labels: appLabels,
        },
        spec: {
            replicas: 1,
            selector: { matchLabels: appLabels },
            template: {
                metadata: {
                    labels: appLabels,
                },
                spec: {
                    containers: [
                        {
                            name: name,
                            image: "nginx:latest",
                            ports: [{ name: "http", containerPort: 80 }],
                            volumeMounts: [
                                {
                                    name: "secrets-store01-inline",
                                    mountPath: "/mnt/secrets-store",
                                    readOnly: true,
                                },
                            ],
                        },
                    ],
                    volumes: [
                        {
                            name: "secrets-store01-inline",
                            csi: {
                                driver: "secrets-store.csi.k8s.io",
                                readOnly: true,
                                volumeAttributes: { secretProviderClass: "azure-kvname-system-msi" },
                            },
                        },
                    ],
                },
            },
        },
    },
    {
        provider: provider,
    }
);
SecretProviderClass is a CustomResource, which isn't typed because the fields can be anything you want.
const secret = new k8s.apiextensions.CustomResource("cert", {
    apiVersion: "secrets-store.csi.x-k8s.io/v1",
    kind: "SecretProviderClass",
    metadata: {
        namespace: "kube-system",
    },
    spec: {
        provider: "azure",
        secretObjects: [{
            data: [{
                objectName: cert.certificate.name,
                key: "tls.key",
            }, {
                objectName: cert.certificate.name,
                key: "tls.crt"
            }],
            secretName: "ingress-tls-csi",
            type: "kubernetes.io/tls",
        }],
        parameters: {
            usePodIdentity: "true",
            keyvaultName: cert.keyvault.name,
            objects: pulumi.interpolate`array:\n  - |\n    objectName: ${cert.certificate.name}\n    objectType: secret\n`,
            tenantId: current.then(config => config.tenantId),
        }
    }
}, { provider: k8sCluster.k8sProvider })
Note: the objects array might work with JSON.stringify, but I haven't yet tried that.
If you want strong typing for the CRD, you can use crd2pulumi.
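As a rough example, something like the following should generate Node.js types from the CRD manifest (a sketch: the output directory and CRD file name are placeholders, and the exact flags may differ between crd2pulumi versions):
crd2pulumi --nodejsPath ./crds secretproviderclass-crd.yaml
The generated classes can then replace the untyped k8s.apiextensions.CustomResource shown above.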

Shared MongoDB collection between two Meteor apps deployed in Amazon AWS not working

I have deployed a Meteor app to port 80 of my AWS instance using MUP. I deployed a second Meteor app to port 3000, this time omitting the mongo setup and specifying the mongo URL in the mup.js file. The setup works fine and the second app is deployed, but none of my publications seem to work. I have tried the same setup with two test apps previously and it worked.
MUP.JS of App 1
module.exports = {
  servers: {
    one: {
      host: 'IP',
      username: 'ubuntu',
      pem: 'path to my pem file'
    }
  },
  meteor: {
    name: 'Dashboard',
    path: 'Path to my project',
    servers: {
      one: {}
    },
    buildOptions: {
      serverOnly: true,
    },
    docker: {
      image: 'abernix/meteord:base',
    },
    env: {
      PORT: 80,
      ROOT_URL: 'base url/',
      MONGO_URL: 'mongodb://mongodb:27017/dbname'
    },
    deployCheckWaitTime: 320,
    enableUploadProgressBar: true
  },
  mongo: {
    oplog: true,
    port: 27017,
    servers: {
      one: {},
    },
  },
};
MUP.JS of App 2
module.exports = {
  servers: {
    one: {
      host: 'IP',
      username: 'ubuntu',
      pem: 'path to my pem file'
    }
  },
  meteor: {
    name: 'DashBoard2',
    path: 'Path to my project',
    servers: {
      one: {}
    },
    buildOptions: {
      serverOnly: true,
    },
    docker: {
      image: 'abernix/meteord:base',
    },
    env: {
      PORT: 3000,
      ROOT_URL: 'base url/',
      MONGO_URL: 'mongodb://mongodb:27017/dbname'
    },
    deployCheckWaitTime: 320,
    enableUploadProgressBar: true
  },
};
Are you certain the connection to the database has been made? Typically the Meteor console will give an error if it can't connect to the MongoDB database. Also, not sure if you have just omitted it from your connection string, but it should be:
mongodb://user:password@server:port/database

mongodb mongoose connection open callback doesn't get called

I have a MEAN project and this is a snippet from my server.js
var db = require('./config/db');
// url : 'mongodb://localhost/cdnserver'
// results are the same for 127.0.0.1 or the machine's ip
console.log(mongoose.connect(db.url));
mongoose.set('debug', true);

mongoose.connection.on('connected', function () {
    console.log('Mongoose default connection open to ' + db.url);
});

// If the connection throws an error
mongoose.connection.on('error', function (err) {
    console.log('Mongoose default connection error: ' + err);
});

// When the connection is disconnected
mongoose.connection.on('disconnected', function () {
    console.log('Mongoose default connection disconnected');
});
This setup has been working well for over 3 months now. I am now replicating my whole MEAN stack, along with the database, to another machine. I took a mongodump and did a mongorestore. The restored DB looks fine through mongo from the terminal.
However, when starting the server, the connection 'connected' callback is not getting called. The 'disconnected' and 'error' callbacks do get called if I stop the mongodb service. How do I debug this further?
I am attaching the console output from both setups.
Setup 1 :
Mongoose {
connections:
[ NativeConnection {
base: [Circular],
collections: {},
models: {},
replica: false,
hosts: null,
host: 'localhost',
port: 27017,
user: undefined,
pass: undefined,
name: 'cdnserver',
options: [Object],
otherDbs: [],
_readyState: 2,
_closeCalled: false,
_hasOpened: false,
_listening: false,
db: [Object] } ],
plugins: [],
models: {},
modelSchemas: {},
options: { pluralization: true } }
Server up on 80
Mongoose default connection open to mongodb://localhost/cdnserver
1
Mongoose: videos.find({}) { skip: 0, limit: 5, fields: undefined }
Setup 2:
Mongoose {
connections:
[ NativeConnection {
base: [Circular],
collections: {},
models: {},
replica: false,
hosts: null,
host: 'localhost',
port: 27017,
user: undefined,
pass: undefined,
name: 'cdnserver',
options: [Object],
otherDbs: [],
_readyState: 2,
_closeCalled: false,
_hasOpened: false,
_listening: false,
db: [Object] } ],
plugins: [],
models: {},
modelSchemas: {},
options: { pluralization: true } }
Server up on 80
1
1
Mongoose default connection disconnected
Mongoose default connection error: Error: connection closed
cat /var/log/mongodb/mongodb.log shows exactly the same output on both machines.
Update 1: The setup started working properly out of the blue, then stopped again. I am not able to figure out what is making this happen.
I finally figured it out: the new setup was using a newer version of Node.js.
When I moved from 7.x back to 6.x it worked fine. I guess the mongoose, Node 7, and mongodb driver versions didn't go well together.
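If you want to make the expected runtime visible so this doesn't silently regress when the server is rebuilt, one option is to declare it in package.json (a sketch; adjust the range to whatever combination you have verified):
{
  "engines": {
    "node": "6.x"
  }
}
npm only warns on a mismatch by default, but it at least documents which Node version the mongoose/mongodb combination was tested against.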