Exception when creating a user on ejabberd using Smack - xmpp

I have client code on a server that creates users on ejabberd using an admin login. The following is my code for creating a user:
public Response registerNewUser(NewUserInfo info) {
    logger.info("start : registerNewUser");
    ConnectionConfiguration conf = new ConnectionConfiguration("IISD43", 5222);
    XMPPConnection con = new XMPPConnection(conf);
    con.connect();
    con.login("admin", "admin");
    AccountManager accountManager = con.getAccountManager();
    try {
        System.out.println(accountManager.supportsAccountCreation());
        accountManager.createAccount(info.getPhoneNumber(), "test" + info.getPhoneNumber());
        return new Response(200, "User Registered");
    } catch (XMPPException e) {
        e.printStackTrace();
        logger.error("failed to create new user for userInfo " + info.getPhoneNumber() + " " + e);
        return new Response(400, "User Not Registered");
    } catch (Exception e) {
        e.printStackTrace();
        return new Response(400, "User Not Registered");
    } finally {
        logger.info("end : registerNewUser");
    }
}
The SYSO prints true.
Error:
forbidden(403)
at org.jivesoftware.smack.AccountManager.createAccount(AccountManager.java:240)
at com.notificationprocessor.impl.XMPPRequestManger.registerNewUser(XMPPRequestManger.java:52)
at com.notificationprocessor.controller.RequestManager.createNotificationClient(RequestManager.java:34)
My ejabberd is on a Windows machine; its yml file is as follows:
###
### ejabberd configuration file
###
###
### The parameters used in this configuration file are explained in more detail
### in the ejabberd Installation and Operation Guide.
### Please consult the Guide in case of doubts, it is included with
### your copy of ejabberd, and is also available online at
### http://www.process-one.net/en/ejabberd/docs/
### The configuration file is written in YAML.
### Refer to http://en.wikipedia.org/wiki/YAML for the brief description.
### However, ejabberd treats different literals as different types:
###
### - unquoted or single-quoted strings. They are called "atoms".
### Example: dog, 'Jupiter', '3.14159', YELLOW
###
### - numeric literals. Example: 3, -45.0, .0
###
### - quoted or folded strings.
### Examples of quoted string: "Lizzard", "orange".
### Example of folded string:
### > Art thou not Romeo,
### and a Montague?
### =======
### LOGGING
##
## loglevel: Verbosity of log files generated by ejabberd.
## 0: No ejabberd log at all (not recommended)
## 1: Critical
## 2: Error
## 3: Warning
## 4: Info
## 5: Debug
##
loglevel: 4
##
## rotation: Describe how to rotate logs. Either size and/or date can trigger
## log rotation. Setting count to N keeps N rotated logs. Setting count to 0
## does not disable rotation, it instead rotates the file and keeps no previous
## versions around. Setting size to X rotate log when it reaches X bytes.
## To disable rotation set the size to 0 and the date to ""
## Date syntax is taken from the syntax newsyslog uses in newsyslog.conf.
## Some examples:
## $D0 rotate every night at midnight
## $D23 rotate every day at 23:00 hr
## $W0D23 rotate every week on Sunday at 23:00 hr
## $W5D16 rotate every week on Friday at 16:00 hr
## $M1D0 rotate on the first day of every month at midnight
## $M5D6 rotate on every 5th day of the month at 6:00 hr
##
log_rotate_size: 10485760
log_rotate_date: ""
log_rotate_count: 1
##
## overload protection: If you want to limit the number of messages per second
## allowed from error_logger, which is a good idea if you want to avoid a flood
## of messages when system is overloaded, you can set a limit.
## 100 is ejabberd's default.
log_rate_limit: 100
##
## watchdog_admins: Only useful for developers: if an ejabberd process
## consumes a lot of memory, send live notifications to these XMPP
## accounts.
##
## watchdog_admins:
## - "admin#IISD43"
### ================
### SERVED HOSTNAMES
##
## hosts: Domains served by ejabberd.
## You can define one or several, for example:
## hosts:
## - "example.net"
## - "example.com"
## - "example.org"
##
hosts:
- "IISD43"
##
## route_subdomains: Delegate subdomains to other XMPP servers.
## For example, if this ejabberd serves example.org and you want
## to allow communication with an XMPP server called im.example.org.
##
## route_subdomains: s2s
### ===============
### LISTENING PORTS
##
## listen: The ports ejabberd will listen on, which service each is handled
## by and what options to start it with.
##
listen:
-
port: 5222
module: ejabberd_c2s
max_stanza_size: 65536
shaper: c2s_shaper
access: c2s
starttls: true
certfile: "C:\\Users\\IISU43\\AppData\\Roaming\\ejabberd\\conf\\server.pem"
## Custom OpenSSL options
##
## protocol_options:
## - "no_sslv3"
## - "no_tlsv1"
-
port: 5269
module: ejabberd_s2s_in
max_stanza_size: 131072
shaper: s2s_shaper
##
## ejabberd_service: Interact with external components (transports, ...)
##
## -
## port: 8888
## module: ejabberd_service
## access: all
## shaper_rule: fast
## ip: "127.0.0.1"
## hosts:
## "icq.example.org":
## password: "secret"
## "sms.example.org":
## password: "secret"
##
## ejabberd_stun: Handles STUN Binding requests
##
## -
## port: 3478
## transport: udp
## module: ejabberd_stun
##
## To handle XML-RPC requests that provide admin credentials:
##
## -
## port: 4560
## module: ejabberd_xmlrpc
-
port: 5280
module: ejabberd_http
request_handlers:
"/websocket": ejabberd_http_ws
# "/pub/archive": mod_http_fileserver
web_admin: true
http_poll: false
http_bind: true
## register: true
captcha: false
##
## s2s_use_starttls: Enable STARTTLS + Dialback for S2S connections.
## Allowed values are: false optional required required_trusted
## You must specify a certificate file.
##
## s2s_use_starttls: optional
##
## s2s_certfile: Specify a certificate file.
##
## s2s_certfile: "C:\\Users\\IISU43\\AppData\\Roaming\\ejabberd\\conf\\server.pem"
## Custom OpenSSL options
##
## s2s_protocol_options:
## - "no_sslv3"
## - "no_tlsv1"
##
## domain_certfile: Specify a different certificate for each served hostname.
##
## host_config:
## "example.org":
## domain_certfile: "C:\\Users\\IISU43\\AppData\\Roaming\\ejabberd\\conf\\example_org.pem"
## "example.com":
## domain_certfile: "C:\\Users\\IISU43\\AppData\\Roaming\\ejabberd\\conf\\example_com.pem"
##
## S2S whitelist or blacklist
##
## Default s2s policy for undefined hosts.
##
## s2s_policy: s2s
##
## Outgoing S2S options
##
## Preferred address families (which to try first) and connect timeout
## in milliseconds.
##
## outgoing_s2s_families:
## - ipv4
## - ipv6
## outgoing_s2s_timeout: 10000
### ==============
### AUTHENTICATION
##
## auth_method: Method used to authenticate the users.
## The default method is the internal.
## If you want to use a different method,
## comment this line and enable the correct ones.
##
auth_method: internal
##
## Store the plain passwords or hashed for SCRAM:
## auth_password_format: plain
## auth_password_format: scram
##
## Define the FQDN if ejabberd doesn't detect it:
## fqdn: "server3.example.com"
##
## Authentication using external script
## Make sure the script is executable by ejabberd.
##
## auth_method: external
## extauth_program: "/path/to/authentication/script"
##
## Authentication using ODBC
## Remember to setup a database in the next section.
##
## auth_method: odbc
##
## Authentication using PAM
##
## auth_method: pam
## pam_service: "pamservicename"
##
## Authentication using LDAP
##
## auth_method: ldap
##
## List of LDAP servers:
## ldap_servers:
## - "loalhost"
##
## Encryption of connection to LDAP servers:
## ldap_encrypt: none
## ldap_encrypt: tls
##
## Port to connect to on LDAP servers:
## ldap_port: 389
## ldap_port: 636
##
## LDAP manager:
## ldap_rootdn: "dc=example,dc=com"
##
## Password of LDAP manager:
## ldap_password: "******"
##
## Search base of LDAP directory:
## ldap_base: "dc=example,dc=com"
##
## LDAP attribute that holds user ID:
## ldap_uids:
## - "mail": "%u#mail.example.org"
##
## LDAP filter:
## ldap_filter: "(objectClass=shadowAccount)"
##
## Anonymous login support:
## auth_method: anonymous
## anonymous_protocol: sasl_anon | login_anon | both
## allow_multiple_connections: true | false
##
## host_config:
## "public.example.org":
## auth_method: anonymous
## allow_multiple_connections: false
## anonymous_protocol: sasl_anon
##
## To use both anonymous and internal authentication:
##
## host_config:
## "public.example.org":
## auth_method:
## - internal
## - anonymous
### ==============
### DATABASE SETUP
## ejabberd by default uses the internal Mnesia database,
## so you do not necessarily need this section.
## This section provides configuration examples in case
## you want to use other database backends.
## Please consult the ejabberd Guide for details on database creation.
##
## MySQL server:
##
## odbc_type: mysql
## odbc_server: "server"
## odbc_database: "database"
## odbc_username: "username"
## odbc_password: "password"
##
## If you want to specify the port:
## odbc_port: 1234
##
## PostgreSQL server:
##
## odbc_type: pgsql
## odbc_server: "server"
## odbc_database: "database"
## odbc_username: "username"
## odbc_password: "password"
##
## If you want to specify the port:
## odbc_port: 1234
##
## If you use PostgreSQL, have a large database, and need a
## faster but inexact replacement for "select count(*) from users"
##
## pgsql_users_number_estimate: true
##
## ODBC compatible or MSSQL server:
##
## odbc_type: odbc
## odbc_server: "DSN=ejabberd;UID=ejabberd;PWD=ejabberd"
##
## Number of connections to open to the database for each virtual host
##
## odbc_pool_size: 10
##
## Interval to make a dummy SQL request to keep the connections to the
## database alive. Specify in seconds: for example 28800 means 8 hours
##
## odbc_keepalive_interval: undefined
### ===============
### TRAFFIC SHAPERS
shaper:
##
## The "normal" shaper limits traffic speed to 1000 B/s
##
normal: 1000
##
## The "fast" shaper limits traffic speed to 50000 B/s
##
fast: 50000
##
## This option specifies the maximum number of elements in the queue
## of the FSM. Refer to the documentation for details.
##
max_fsm_queue: 1000
### ====================
### ACCESS CONTROL LISTS
acl:
##
## The 'admin' ACL grants administrative privileges to XMPP accounts.
## You can put here as many accounts as you want.
##
admin:
user:
- "admin": "IISD43"
##
## Blocked users
##
## blocked:
## user:
## - "baduser": "example.org"
## - "test"
## Local users: don't modify this.
##
local:
user_regexp: ""
##
## More examples of ACLs
##
## jabberorg:
## server:
## - "jabber.org"
## aleksey:
## user:
## - "aleksey": "jabber.ru"
## test:
## user_regexp: "^test"
## user_glob: "test*"
##
## Loopback network
##
loopback:
ip:
- "127.0.0.0/8"
##
## Bad XMPP servers
##
## bad_servers:
## server:
## - "xmpp.zombie.org"
## - "xmpp.spam.com"
##
## Define specific ACLs in a virtual host.
##
## host_config:
## "localhost":
## acl:
## admin:
## user:
## - "bob-local": "localhost"
### ============
### ACCESS RULES
access:
## Maximum number of simultaneous sessions allowed for a single user:
max_user_sessions:
all: 10
## Maximum number of offline messages that users can have:
max_user_offline_messages:
admin: 5000
all: 100
## This rule allows access only for local users:
local:
local: allow
## Only non-blocked users can use c2s connections:
c2s:
blocked: deny
all: allow
## For C2S connections, all users except admins use the "normal" shaper
c2s_shaper:
admin: none
all: normal
## All S2S connections use the "fast" shaper
s2s_shaper:
all: fast
## Only admins can send announcement messages:
announce:
admin: allow
## Only admins can use the configuration interface:
configure:
admin: allow
## Admins of this server are also admins of the MUC service:
muc_admin:
admin: allow
## Only accounts of the local ejabberd server can create rooms:
muc_create:
local: allow
## All users are allowed to use the MUC service:
muc:
all: allow
## Only accounts on the local ejabberd server can create Pubsub nodes:
pubsub_createnode:
local: allow
## In-band registration allows registration of any possible username.
## To disable in-band registration, replace 'allow' with 'deny'.
register:
all: allow
## Only allow to register from localhost
trusted_network:
loopback: allow
## Do not establish S2S connections with bad servers
## s2s:
## bad_servers: deny
## all: allow
## By default the frequency of account registrations from the same IP
## is limited to 1 account every 10 minutes. To disable, specify: infinity
## registration_timeout: 600
##
## Define specific Access Rules in a virtual host.
##
## host_config:
## "localhost":
## access:
## c2s:
## admin: allow
## all: deny
## register:
## all: deny
### ================
### DEFAULT LANGUAGE
##
## language: Default language used for server messages.
##
language: "en"
##
## Set a different default language in a virtual host.
##
## host_config:
## "localhost":
## language: "ru"
### =======
### CAPTCHA
##
## Full path to a script that generates the image.
##
## captcha_cmd: "C:\\Program Files\\ejabberd-15.07\\lib\\ejabberd-15.07\\priv\\tools\\captcha.sh"
##
## Host for the URL and port where ejabberd listens for CAPTCHA requests.
##
## captcha_host: "example.org:5280"
##
## Limit CAPTCHA calls per minute for JID/IP to avoid DoS.
##
## captcha_limit: 5
### =======
### MODULES
##
## Modules enabled in all ejabberd virtual hosts.
##
modules:
mod_adhoc: []
mod_announce: # recommends mod_adhoc
access: announce
## mod_blocking: [] # requires mod_privacy
mod_caps: []
mod_carboncopy: []
mod_configure: [] # requires mod_adhoc
mod_disco: []
## mod_echo: []
## mod_irc: []
mod_http_bind: []
## mod_http_fileserver:
## docroot: "/var/www"
## accesslog: "C:\\Program Files\\ejabberd-15.07\\logs\\access.log"
mod_last: []
mod_muc:
## host: "conference.#HOST#"
access: muc
access_create: muc_create
access_persistent: muc_create
access_admin: muc_admin
## mod_muc_log: []
mod_offline:
access_max_user_messages: max_user_offline_messages
## mod_ping: []
## mod_pres_counter:
## count: 5
## interval: 60
mod_privacy: []
mod_private: []
## mod_proxy65: []
mod_pubsub:
access_createnode: pubsub_createnode
## reduces resource comsumption, but XEP incompliant
ignore_pep_from_offline: true
## XEP compliant, but increases resource comsumption
## ignore_pep_from_offline: false
last_item_cache: false
plugins:
- "flat"
- "hometree"
- "pep" # pep requires mod_caps
mod_register:
##
## Protect In-Band account registrations with CAPTCHA.
##
## captcha_protected: true
##
## Set the minimum informational entropy for passwords.
##
## password_strength: 32
##
## After successful registration, the user receives
## a message with this subject and body.
##
welcome_message:
subject: "Welcome!"
body: |-
Hi.
Welcome to this XMPP server.
##
## When a user registers, send a notification to
## these XMPP accounts.
##
## registration_watchers:
## - "admin1#example.org"
##
## Only clients in the server machine can register accounts
##
ip_access: trusted_network
##
## Local c2s or remote s2s users cannot register accounts
##
## access_from: deny
access: register
mod_roster: []
mod_shared_roster: []
## mod_time: []
mod_vcard: []
mod_version: []
##
## Enable modules with custom options in a specific virtual host
##
## append_host_config:
## "localhost":
## modules:
## mod_echo:
## host: "mirror.localhost"
##
## Enable modules management via ejabberdctl for installation and
## uninstallation of public/private contributed modules
## (enabled by default)
##
allow_contrib_modules: true
### Local Variables:
### mode: yaml
### End:
### vim: set filetype=yaml tabstop=8
As far as I can tell from the config file, it allows everyone to create accounts, though I don't know whether I am completely right.
Any pointers on resolving this error are appreciated.

The 403 most likely comes from the registration restrictions in your config: mod_register uses ip_access: trusted_network, and the trusted_network rule only allows the loopback network, so registration attempts coming from your client's IP are rejected. Make these changes to your ejabberd.yml file:
## In-band registration allows registration of any possible username.
## To disable in-band registration, replace 'allow' with 'deny'.
register:
- allow
## Only allow to register from network.
trusted_network:
- allow: all
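Alternatively, a narrower sketch based on the mod_register section of the posted config (not verified on your setup): keep the trusted_network ACL limited to loopback and instead point mod_register's ip_access at the built-in all rule, so only the registration IP check is relaxed:

modules:
  mod_register:
    ## accept registration requests from any IP, not only loopback
    ## (assumption: ejabberd 15.x mod_register option syntax)
    ip_access: all
    access: register

Note that with in-band registration open to any IP, anyone who can reach port 5222 can create accounts, so consider tightening this again once registration from your server works.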

Related

How to set up a MongoDB Grafana dashboard using helm bitnami/mongodb and kube-prometheus-stack

I have the mongodb helm chart installed on my k8s cluster (https://github.com/bitnami/charts/tree/master/bitnami/mongodb).
I also have kube-prometheus-stack installed on my k8s cluster (https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack).
I've set up a Grafana dashboard for mongodb which should pull in data from a Prometheus data source (https://grafana.com/grafana/dashboards/2583).
However, my grafana dashboard is empty with no data.
I'm wondering if I have not configured something with the helm chart properly. Please see the mongodb helm chart values below.
mongodb chart.yml
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
global:
# imageRegistry: myRegistryName
# imagePullSecrets:
# - myRegistryKeySecretName
# storageClass: myStorageClass
## Override the namespace for resource deployed by the chart, but can itself be overridden by the local namespaceOverride
namespaceOverride: mongodb
image:
## Bitnami MongoDB registry
##
registry: docker.io
## Bitnami MongoDB image name
##
repository: bitnami/mongodb
## Bitnami MongoDB image tag
## ref: https://hub.docker.com/r/bitnami/mongodb/tags/
##
tag: 4.4.1-debian-10-r13
## Specify a imagePullPolicy
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## Set to true if you would like to see extra information on logs
## It turns on Bitnami debugging in minideb-extras-base
## ref: https://github.com/bitnami/minideb-extras-base
debug: false
## String to partially override mongodb.fullname template (will maintain the release name)
##
# nameOverride:
## String to fully override mongodb.fullname template
##
# fullnameOverride:
## Kubernetes Cluster Domain
##
clusterDomain: cluster.local
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## MongoDB architecture. Allowed values: standalone or replicaset
##
architecture: replicaset
## Use StatefulSet instead of Deployment when deploying standalone
##
useStatefulSet: false
## MongoDB Authentication parameters
##
auth:
## Enable authentication
## ref: https://docs.mongodb.com/manual/tutorial/enable-authentication/
##
enabled: true
## MongoDB root password
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#setting-the-root-password-on-first-run
##
rootPassword: "<redacted>"
## MongoDB custom user and database
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#creating-a-user-and-database-on-first-run
##
# username: username
# password: password
# database: database
## Key used for replica set authentication
## Ignored when mongodb.architecture=standalone
##
replicaSetKey: <redacted>
## Existing secret with MongoDB credentials
## NOTE: When it's set the previous parameters are ignored.
##
# existingSecret: name-of-existing-secret
## Name of the replica set
## Ignored when mongodb.architecture=standalone
##
replicaSetName: rs0
## Enable DNS hostnames in the replica set config
## Ignored when mongodb.architecture=standalone
## Ignored when externalAccess.enabled=true
##
replicaSetHostnames: true
## Whether enable/disable IPv6 on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-ipv6
##
enableIPv6: false
## Whether enable/disable DirectoryPerDB on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-directoryperdb
##
directoryPerDB: false
## MongoDB System Log configuration
## ref: https://github.com/bitnami/bitnami-docker-mongodb#configuring-system-log-verbosity-level
##
systemLogVerbosity: 0
disableSystemLog: false
## MongoDB configuration file for Primary and Secondary nodes. For documentation of all options, see:
## http://docs.mongodb.org/manual/reference/configuration-options/
## Example:
## configuration:
## # where and how to store data.
## storage:
## dbPath: /bitnami/mongodb/data/db
## journal:
## enabled: true
## directoryPerDB: false
## # where to write logging data
## systemLog:
## destination: file
## quiet: false
## logAppend: true
## logRotate: reopen
## path: /opt/bitnami/mongodb/logs/mongodb.log
## verbosity: 0
## # network interfaces
## net:
## port: 27017
## unixDomainSocket:
## enabled: true
## pathPrefix: /opt/bitnami/mongodb/tmp
## ipv6: false
## bindIpAll: true
## # replica set options
## #replication:
## #replSetName: replicaset
## #enableMajorityReadConcern: true
## # process management options
## processManagement:
## fork: false
## pidFilePath: /opt/bitnami/mongodb/tmp/mongodb.pid
## # set parameter options
## setParameter:
## enableLocalhostAuthBypass: true
## # security options
## security:
## authorization: disabled
## #keyFile: /opt/bitnami/mongodb/conf/keyfile
##
configuration: ""
## ConfigMap with MongoDB configuration for Primary and Secondary nodes
## NOTE: When it's set the arbiter.configuration parameter is ignored
##
# existingConfigmap:
## initdb scripts
## Specify dictionary of scripts to be run at first boot
## Example:
## initdbScripts:
## my_init_script.sh: |
## #!/bin/bash
## echo "Do something."
initdbScripts: {}
## Existing ConfigMap with custom init scripts
##
# initdbScriptsConfigMap:
## Command and args for running the container (set to default if not set). Use array form
##
# command:
# args:
## Additional command line flags
## Example:
## extraFlags:
## - "--wiredTigerCacheSizeGB=2"
##
extraFlags: []
## Additional environment variables to set
## E.g:
## extraEnvVars:
## - name: FOO
## value: BAR
##
extraEnvVars: []
## ConfigMap with extra environment variables
##
# extraEnvVarsCM:
## Secret with extra environment variables
##
# extraEnvVarsSecret:
## Annotations to be added to the MongoDB statefulset. Evaluated as a template.
##
annotations: {}
## Additional labels to be added to the MongoDB statefulset. Evaluated as a template.
##
labels: {}
## Number of MongoDB replicas to deploy.
## Ignored when mongodb.architecture=standalone
##
replicaCount: 1
## StrategyType for MongoDB statefulset
## It can be set to RollingUpdate or Recreate by default.
##
strategyType: RollingUpdate
## MongoDB should be initialized one by one when building the replicaset for the first time.
##
podManagementPolicy: OrderedReady
## Affinity for pod assignment. Evaluated as a template.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Node labels for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Tolerations for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## Lables for MongoDB pods. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## Annotations for MongoDB pods. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## MongoDB pods' priority.
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
# priorityClassName: ""
## MongoDB pods' Security Context.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
##
podSecurityContext:
enabled: true
fsGroup: 1001
## sysctl settings
## Example:
## sysctls:
## - name: net.core.somaxconn
## value: "10000"
##
sysctls: []
## MongoDB containers' Security Context (main and metrics container).
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
## MongoDB containers' resource requests and limits.
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 100m
# memory: 128Mi
requests: {}
# cpu: 100m
# memory: 128Mi
## MongoDB pods' liveness and readiness probes. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
## Custom Liveness probes for MongoDB pods
##
customLivenessProbe: {}
## Custom Rediness probes MongoDB pods
##
customReadinessProbe: {}
## Add init containers to the MongoDB pods.
## Example:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: {}
## Add sidecars to the MongoDB pods.
## Example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: {}
## extraVolumes and extraVolumeMounts allows you to mount other volumes on MongoDB pods
## Examples:
## extraVolumeMounts:
## - name: extras
## mountPath: /usr/share/extras
## readOnly: true
## extraVolumes:
## - name: extras
## emptyDir: {}
extraVolumeMounts: []
extraVolumes: []
## MongoDB Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
create: true
## Min number of pods that must still be available after the eviction
##
minAvailable: 1
## Max number of pods that can be unavailable after the eviction
##
# maxUnavailable: 1
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
enabled: true
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
## Ignored when mongodb.architecture=replicaset
##
# existingClaim:
## PV Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner.
##
# storageClass: "-"
## PV Access Mode
##
accessModes:
- ReadWriteOnce
## PVC size
##
size: 50Gi
## PVC annotations
##
annotations: {}
## The path the volume will be mounted at, useful when using different
## MongoDB images.
##
mountPath: /bitnami/mongodb
## The subdirectory of the volume to mount to, useful in dev environments
## and one PV for multiple services.
##
subPath: ""
## Service parameters
##
service:
## Service type
##
type: ClusterIP
## MongoDB service port
##
port: 27017
## MongoDB service port name
##
portName: mongodb
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
nodePort: ""
## MongoDB service clusterIP IP
##
# clusterIP: None
## Specify the externalIP value ClusterIP service type.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
##
externalIPs: []
## Specify the loadBalancerIP value for LoadBalancer service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
##
# loadBalancerIP:
## Specify the loadBalancerSourceRanges value for LoadBalancer service types.
## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
##
loadBalancerSourceRanges: []
## Provide any additional annotations which may be required. Evaluated as a template
##
annotations: {}
## External Access to MongoDB nodes configuration
##
externalAccess:
## Enable Kubernetes external cluster access to MongoDB nodes
##
enabled: true
## External IPs auto-discovery configuration
## An init container is used to auto-detect LB IPs or node ports by querying the K8s API
## Note: RBAC might be required
##
autoDiscovery:
## Enable external IP/ports auto-discovery
##
enabled: true
## Bitnami Kubectl image
## ref: https://hub.docker.com/r/bitnami/kubectl/tags/
##
image:
registry: docker.io
repository: bitnami/kubectl
tag: 1.18.9-debian-10-r4
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## Example:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Init Container resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 100m
# memory: 128Mi
requests: {}
# cpu: 100m
# memory: 128Mi
## Parameters to configure K8s service(s) used to externally access MongoDB
## A new service per broker will be created
##
service:
## Service type. Allowed values: LoadBalancer or NodePort
##
type: LoadBalancer
## Port used when service type is LoadBalancer
##
port: 27017
## Array of load balancer IPs for each MongoDB node. Length must be the same as replicaCount
## Example:
## loadBalancerIPs:
## - X.X.X.X
## - Y.Y.Y.Y
##
loadBalancerIPs: []
## Load Balancer sources
## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
## Example:
## loadBalancerSourceRanges:
## - 10.10.10.0/24
##
loadBalancerSourceRanges: []
## Array of node ports used for each MongoDB node. Length must be the same as replicaCount
## Example:
## nodePorts:
## - 30001
## - 30002
##
nodePorts: []
## When service type is NodePort, you can specify the domain used for MongoDB advertised hostnames.
## If not specified, the container will try to get the kubernetes node external IP
##
# domain: mydomain.com
## Provide any additional annotations which may be required. Evaluated as a template
##
annotations: {}
##
## MongoDB Arbiter parameters.
##
arbiter:
## Enable deploying the MongoDB Arbiter
## https://docs.mongodb.com/manual/tutorial/add-replica-set-arbiter/
enabled: true
## MongoDB configuration file for the Arbiter. For documentation of all options, see:
## http://docs.mongodb.org/manual/reference/configuration-options/
##
configuration: ""
## ConfigMap with MongoDB configuration for the Arbiter
## NOTE: When it's set the arbiter.configuration parameter is ignored
##
# existingConfigmap:
## Command and args for running the container (set to default if not set). Use array form
##
# command:
# args:
## Additional command line flags
## Example:
## extraFlags:
## - "--wiredTigerCacheSizeGB=2"
##
extraFlags: []
## Additional environment variables to set
## E.g:
## extraEnvVars:
## - name: FOO
## value: BAR
##
extraEnvVars: []
## ConfigMap with extra environment variables
##
# extraEnvVarsCM:
## Secret with extra environment variables
##
# extraEnvVarsSecret:
## Annotations to be added to the Arbiter statefulset. Evaluated as a template.
##
annotations: {}
## Additional to be added to the Arbiter statefulset. Evaluated as a template.
##
labels: {}
## Affinity for pod assignment. Evaluated as a template.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Node labels for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Tolerations for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## Lables for MongoDB Arbiter pods. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## Annotations for MongoDB Arbiter pods. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## MongoDB Arbiter pods' priority.
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
# priorityClassName: ""
## MongoDB Arbiter pods' Security Context.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
##
podSecurityContext:
enabled: true
fsGroup: 1001
## sysctl settings
## Example:
## sysctls:
## - name: net.core.somaxconn
## value: "10000"
##
sysctls: []
## MongoDB Arbiter containers' Security Context (only main container).
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
containerSecurityContext:
enabled: true
runAsUser: 1001
## MongoDB Arbiter containers' resource requests and limits.
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 100m
# memory: 128Mi
requests: {}
# cpu: 100m
# memory: 128Mi
## MongoDB Arbiter pods' liveness and readiness probes. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
## Custom Liveness probes for MongoDB Arbiter pods
##
customLivenessProbe: {}
## Custom Rediness probes MongoDB Arbiter pods
##
customReadinessProbe: {}
## Add init containers to the MongoDB Arbiter pods.
## Example:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: {}
## Add sidecars to the MongoDB Arbiter pods.
## Example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: {}
## extraVolumes and extraVolumeMounts allows you to mount other volumes on MongoDB Arbiter pods
## Examples:
## extraVolumeMounts:
## - name: extras
## mountPath: /usr/share/extras
## readOnly: true
## extraVolumes:
## - name: extras
## emptyDir: {}
extraVolumeMounts: []
extraVolumes: []
## MongoDB Arbiter Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
create: false
## Min number of pods that must still be available after the eviction
##
minAvailable: 1
## Max number of pods that can be unavailable after the eviction
##
# maxUnavailable: 1
## ServiceAccount
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
## Specifies whether a ServiceAccount should be created
##
create: true
## The name of the ServiceAccount to use.
## If not set and create is true, a name is generated using the rabbitmq.fullname template
##
# name:
## Role Based Access
## ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
## Specifies whether RBAC rules should be created
## binding MongoDB ServiceAccount to a role
## that allows MongoDB pods querying the K8s API
##
create: true
## Init Container paramaters
## Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each component
## values from the securityContext section of the component
##
volumePermissions:
enabled: false
## Bitnami Minideb image
## ref: https://hub.docker.com/r/bitnami/minideb/tags/
##
image:
registry: docker.io
repository: bitnami/minideb
tag: buster
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: Always
## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## Example:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Init Container resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 100m
# memory: 128Mi
requests: {}
# cpu: 100m
# memory: 128Mi
## Init container Security Context
## Note: the chown of the data folder is done to containerSecurityContext.runAsUser
## and not the below volumePermissions.securityContext.runAsUser
## When runAsUser is set to special value "auto", init container will try to chwon the
## data folder to autodetermined user&group, using commands: `id -u`:`id -G | cut -d" " -f2`
## "auto" is especially useful for OpenShift which has scc with dynamic userids (and 0 is not allowed).
## You may want to use this volumePermissions.securityContext.runAsUser="auto" in combination with
## podSecurityContext.enabled=false,containerSecurityContext.enabled=false and shmVolume.chmod.enabled=false
##
securityContext:
runAsUser: 0
## Prometheus Exporter / Metrics
##
metrics:
enabled: true
## Bitnami MongoDB Promtheus Exporter image
## ref: https://hub.docker.com/r/bitnami/mongodb-exporter/tags/
##
image:
registry: docker.io
repository: bitnami/mongodb-exporter
tag: 0.11.1-debian-10-r32
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## String with extra flags to the metrics exporter
## ref: https://github.com/percona/mongodb_exporter/blob/master/mongodb_exporter.go
##
extraFlags: ""
## String with additional URI options to the metrics exporter
## ref: https://docs.mongodb.com/manual/reference/connection-string
##
extraUri: ""
## Metrics exporter container resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 100m
# memory: 128Mi
requests: {}
# cpu: 100m
# memory: 128Mi
## Prometheus Exporter service configuration
##
service:
## Annotations for Prometheus Exporter pods. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.metrics.service.port }}"
prometheus.io/path: "/metrics"
type: ClusterIP
port: 9216
## Metrics exporter liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
##
livenessProbe:
enabled: true
initialDelaySeconds: 15
periodSeconds: 5
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 1
failureThreshold: 3
successThreshold: 1
## Prometheus Service Monitor
## ref: https://github.com/coreos/prometheus-operator
## https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md
##
serviceMonitor:
## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
enabled: true
## Specify the namespace where Prometheus Operator is running
##
# namespace: monitoring
## Specify the interval at which metrics should be scraped
##
interval: 30s
## Specify the timeout after which the scrape is ended
##
# scrapeTimeout: 30s
## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
##
additionalLabels: {}
## Custom PrometheusRule to be defined
## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
##
prometheusRule:
enabled: false
additionalLabels: {}
## Specify the namespace where Prometheus Operator is running
##
# namespace: monitoring
## Define individual alerting rules as required
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#rulegroup
## https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/
##
rules: {}
Installing Prometheus using the "prometheus-community/kube-prometheus-stack" helm chart is quite an extensive topic in itself, considering that it has a lot of configurable options.
As the helm chart comes with the Prometheus Operator, we have used the PodMonitor and/or ServiceMonitor CRDs, as they provide far more configuration options. Here's some documentation around that.
We've installed it with "prometheus.prometheusSpec.serviceMonitorSelector.matchLabels" set to a label value, something like this:
serviceMonitorSelector:
matchLabels:
monitoring-platform: core-prometheus
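In the kube-prometheus-stack values file this selector sits under prometheus.prometheusSpec, as referenced above; a sketch of the relevant fragment (the surrounding keys are assumed from the chart's standard layout):

prometheus:
  prometheusSpec:
    serviceMonitorSelector:
      matchLabels:
        monitoring-platform: core-prometheus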
As for the mongodb helm chart, install it with "metrics.enabled=true", "metrics.serviceMonitor.enabled=true" and "metrics.serviceMonitor.additionalLabels" set to a value matching the label defined in the Prometheus serviceMonitorSelector (monitoring-platform: core-prometheus in this case). Something like this:
metrics:
enabled: true
serviceMonitor:
enabled: true
additionalLabels:
monitoring-platform: core-prometheus
This would enable Prometheus to scrape metrics from MongoDB, which would subsequently show up in Grafana.
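For example, the same values can be applied at install time with --set flags (the release and namespace names here are hypothetical; adjust to your environment, or put the values in a values file instead):

helm upgrade --install my-mongodb bitnami/mongodb \
  --namespace mongodb \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true \
  --set metrics.serviceMonitor.additionalLabels.monitoring-platform=core-prometheus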
[screenshot: grafana-mongodb-dashboard]
When you deploy kube-prometheus-stack with helm, its ServiceMonitor selector will by default match the label 'release: <your-kube-prometheus-stack-release-name>'.
So, on the MongoDB side, you need to set this label value in "metrics.serviceMonitor.additionalLabels".
metrics:
enabled: true
serviceMonitor:
enabled: true
additionalLabels:
release: <your-kube-prometheus-stack-release-name>
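If you are unsure which release label your Prometheus instance expects, you can inspect the selector on the Prometheus resource and the labels on the generated ServiceMonitor (the namespace names below are assumptions; adjust to your cluster):

kubectl get prometheus -n monitoring -o yaml | grep -A 3 serviceMonitorSelector
kubectl get servicemonitor -n mongodb --show-labels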

Error - Unable to attach or mount volumes: unmounted volumes=[data]

I have had weird problems in Kubernetes. When I run the install command, the pods never start. The PVC was bound. It gave the errors below, in this order:
0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[rabbitmq-token-xl9kq configuration data]: timed out waiting for the condition
attachdetach-controller AttachVolume.Attach failed for volume "pvc-08de562a-2ee2-4c81-9b34-d58736b48120" : attachdetachment timeout for volume 0001-0009-rook-ceph-0000000000000001-83154669-0997-11eb-a1ec-726af9b2e1e1
Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[configuration data rabbitmq-token-xl9kq]: timed out waiting for the condition
I installed RabbitMQ via Helm:
helm install rabbitmq --namespace rabbitmq -f rabbitmq-values.yaml bitnami/rabbitmq
Here is my rabbitmq-values.yaml file:
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
# imageRegistry: myRegistryName
# imagePullSecrets:
# - myRegistryKeySecretName
# storageClass: myStorageClass
## Bitnami RabbitMQ image version
## ref: https://hub.docker.com/r/bitnami/rabbitmq/tags/
##
image:
registry: docker.io
repository: bitnami/rabbitmq
tag: 3.8.9-debian-10-r0
## set to true if you would like to see extra information on logs
## it turns BASH and NAMI debugging in minideb
## ref: https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
##
debug: false
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## String to partially override rabbitmq.fullname template (will maintain the release name)
##
# nameOverride:
## String to fully override rabbitmq.fullname template
##
# fullnameOverride:
## Kubernetes Cluster Domain
##
clusterDomain: cluster.local
## RabbitMQ Authentication parameters
##
auth:
## RabbitMQ application username
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
username: rabbitmq
## RabbitMQ application password
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
password: Qwe123.
# existingPasswordSecret: name-of-existing-secret
## Erlang cookie to determine whether different nodes are allowed to communicate with each other
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
erlangCookie: SWQOKODSQALRPCLNMEQGM4MCSB
# existingErlangSecret: name-of-existing-secret
## Enable encryption to rabbitmq
## ref: https://www.rabbitmq.com/ssl.html
##
tls:
enabled: false
failIfNoPeerCert: true
sslOptionsVerify: verify_peer
caCertificate: |-
serverCertificate: |-
serverKey: |-
# existingSecret: name-of-existing-secret-to-rabbitmq
## Value for the RABBITMQ_LOGS environment variable
## ref: https://www.rabbitmq.com/logging.html#log-file-location
##
logs: '-'
## RabbitMQ Max File Descriptors
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
## ref: https://www.rabbitmq.com/install-debian.html#kernel-resource-limits
##
ulimitNofiles: '65536'
## RabbitMQ maximum available scheduler threads and online scheduler threads. By default it will create a thread per CPU detected, with the following parameters you can tune it manually.
## ref: https://hamidreza-s.github.io/erlang/scheduling/real-time/preemptive/migration/2016/02/09/erlang-scheduler-details.html#scheduler-threads
## ref: https://github.com/bitnami/charts/issues/2189
##
# maxAvailableSchedulers: 2
# onlineSchedulers: 1
## The memory threshold under which RabbitMQ will stop reading from client network sockets, in order to avoid being killed by the OS
## ref: https://www.rabbitmq.com/alarms.html
## ref: https://www.rabbitmq.com/memory.html#threshold
##
memoryHighWatermark:
enabled: true
## Memory high watermark type. Either absolute or relative
##
type: "relative"
## Memory high watermark value.
## The default value of 0.4 stands for 40% of availalbe RAM
## Note: the memory relative limit is applied to the resource.limits.memory to caculate the memory threshold
## You can also use an absolute value, e.g.: 256MB
##
value: 0.4
## Plugins to enable
##
plugins: "rabbitmq_management rabbitmq_peer_discovery_k8s"
## Community plugins to download during container initialization.
## Combine it with extraPlugins to also enable them.
##
# communityPlugins:
## Extra plugins to enable
## Use this instead of `plugins` to add new plugins
##
extraPlugins: "rabbitmq_auth_backend_ldap"
## Clustering settings
##
clustering:
addressType: hostname
## Rebalance master for queues in cluster when new replica is created
## ref: https://www.rabbitmq.com/rabbitmq-queues.8.html#rebalance
##
rebalance: false
## forceBoot: executes 'rabbitmqctl force_boot' to force boot cluster shut down unexpectedly in an
## unknown order.
## ref: https://www.rabbitmq.com/rabbitmqctl.8.html#force_boot
##
forceBoot: false
## Loading a RabbitMQ definitions file to configure RabbitMQ
##
loadDefinition:
enabled: false
## Can be templated if needed, e.g.
## existingSecret: "{{ .Release.Name }}-load-definition"
##
# existingSecret:
## Command and args for running the container (set to default if not set). Use array form
##
# command:
# args:
## Additional environment variables to set
## E.g:
## extraEnvVars:
## - name: FOO
## value: BAR
##
extraEnvVars: []
## ConfigMap with extra environment variables
##
# extraEnvVarsCM:
## Secret with extra environment variables
##
# extraEnvVarsSecret:
## Extra ports to be included in container spec, primarily informational
## E.g:
## extraContainerPorts:
## - name: new_port_name
## containerPort: 1234
##
extraContainerPorts: []
## Configuration file content: required cluster configuration
## Do not override unless you know what you are doing.
## To add more configuration, use `extraConfiguration` of `advancedConfiguration` instead
##
configuration: |-
## Username and password
default_user = {{ .Values.auth.username }}
default_pass = CHANGEME
## Clustering
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
cluster_formation.k8s.host = kubernetes.default.svc.{{ .Values.clusterDomain }}
cluster_formation.node_cleanup.interval = 10
cluster_formation.node_cleanup.only_log_warning = true
cluster_partition_handling = autoheal
# queue master locator
queue_master_locator = min-masters
# enable guest user
loopback_users.guest = false
{{ tpl .Values.extraConfiguration . }}
{{- if .Values.auth.tls.enabled }}
ssl_options.verify = {{ .Values.auth.tls.sslOptionsVerify }}
listeners.ssl.default = {{ .Values.service.tlsPort }}
ssl_options.fail_if_no_peer_cert = {{ .Values.auth.tls.failIfNoPeerCert }}
ssl_options.cacertfile = /opt/bitnami/rabbitmq/certs/ca_certificate.pem
ssl_options.certfile = /opt/bitnami/rabbitmq/certs/server_certificate.pem
ssl_options.keyfile = /opt/bitnami/rabbitmq/certs/server_key.pem
{{- end }}
{{- if .Values.ldap.enabled }}
auth_backends.1 = rabbit_auth_backend_ldap
auth_backends.2 = internal
{{- range $index, $server := .Values.ldap.servers }}
auth_ldap.servers.{{ add $index 1 }} = {{ $server }}
{{- end }} auth_ldap.port = {{ .Values.ldap.port }}
auth_ldap.user_dn_pattern = {{ .Values.ldap.user_dn_pattern }}
{{- if .Values.ldap.tls.enabled }}
auth_ldap.use_ssl = true
{{- end }}
{{- end }}
{{- if .Values.metrics.enabled }}
## Prometheus metrics
prometheus.tcp.port = 9419
{{- end }}
{{- if .Values.memoryHighWatermark.enabled }}
## Memory Threshold
total_memory_available_override_value = {{ include "rabbitmq.toBytes" .Values.resources.limits.memory }}
vm_memory_high_watermark.{{ .Values.memoryHighWatermark.type }} = {{ .Values.memoryHighWatermark.value }}
{{- end }}
## Configuration file content: extra configuration
## Use this instead of `configuration` to add more configuration
##
extraConfiguration: |-
#default_vhost = {{ .Release.Namespace }}-vhost
#disk_free_limit.absolute = 50MB
#load_definitions = /app/load_definition.json
## Configuration file content: advanced configuration
## Use this as additional configuraton in classic config format (Erlang term configuration format)
##
## If you set LDAP with TLS/SSL enabled and you are using self-signed certificates, uncomment these lines.
## advancedConfiguration: |-
## [{
## rabbitmq_auth_backend_ldap,
## [{
## ssl_options,
## [{
## verify, verify_none
## }, {
## fail_if_no_peer_cert,
## false
## }]
## ]}
## }].
##
advancedConfiguration: |-
## LDAP configuration
##
ldap:
enabled: false
## List of LDAP servers hostnames
##
servers: []
## LDAP servers port
##
port: "389"
## Pattern used to translate the provided username into a value to be used for the LDAP bind
## ref: https://www.rabbitmq.com/ldap.html#usernames-and-dns
##
user_dn_pattern: cn=${username},dc=example,dc=org
tls:
## If you enabled TLS/SSL you can set advaced options using the advancedConfiguration parameter.
##
enabled: false
## extraVolumes and extraVolumeMounts allows you to mount other volumes
## Examples:
## extraVolumeMounts:
## - name: extras
## mountPath: /usr/share/extras
## readOnly: true
## extraVolumes:
## - name: extras
## emptyDir: {}
extraVolumeMounts: []
extraVolumes: []
## Optionally specify extra secrets to be created by the chart.
## This can be useful when combined with load_definitions to automatically create the secret containing the definitions to be loaded.
## Example:
## extraSecrets:
## load-definition:
## load_definition.json: |
## {
## ...
## }
##
extraSecrets: {}
## Number of RabbitMQ replicas to deploy
##
replicaCount: 3
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## RabbitMQ should be initialized one by one when building cluster for the first time.
## Therefore, the default value of podManagementPolicy is 'OrderedReady'
## Once the RabbitMQ participates in the cluster, it waits for a response from another
## RabbitMQ in the same cluster at reboot, except the last RabbitMQ of the same cluster.
## If the cluster exits gracefully, you do not need to change the podManagementPolicy
## because the first RabbitMQ of the statefulset always will be last of the cluster.
## However if the last RabbitMQ of the cluster is not the first RabbitMQ due to a failure,
## you must change podManagementPolicy to 'Parallel'.
## ref : https://www.rabbitmq.com/clustering.html#restarting
##
podManagementPolicy: OrderedReady
## Pod labels. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## Pod annotations. Evaluated as a template
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## updateStrategy for RabbitMQ statefulset
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
##
updateStrategyType: RollingUpdate
## Name of the priority class to be used by RabbitMQ pods, priority class needs to be created beforehand
## Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
## Affinity for pod assignment. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Node labels for pod assignment. Evaluated as a template
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Tolerations for pod assignment. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## RabbitMQ pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
##
podSecurityContext:
fsGroup: 1001
runAsUser: 1001
## RabbitMQ containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## Example:
## containerSecurityContext:
## capabilities:
## drop: ["NET_RAW"]
## readOnlyRootFilesystem: true
##
containerSecurityContext: {}
## RabbitMQ containers' resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits:
cpu: 1000m
memory: 2Gi
requests:
cpu: 1000m
memory: 2Gi
## RabbitMQ containers' liveness and readiness probes.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
enabled: true
initialDelaySeconds: 120
timeoutSeconds: 20
periodSeconds: 30
failureThreshold: 3
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 10
timeoutSeconds: 20
periodSeconds: 30
failureThreshold: 3
successThreshold: 1
## Custom Liveness probe
##
customLivenessProbe: {}
## Custom Readiness probe
##
customReadinessProbe: {}
## Add init containers to the pod
## Example:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: {}
## Add sidecars to the pod.
## Example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: {}
## RabbitMQ pods ServiceAccount
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
## Specifies whether a ServiceAccount should be created
##
create: true
## The name of the ServiceAccount to use.
## If not set and create is true, a name is generated using the rabbitmq.fullname template
##
# name:
## Role Based Access
## ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
## Specifies whether RBAC rules should be created
## binding RabbitMQ ServiceAccount to a role
## that allows RabbitMQ pods querying the K8s API
##
create: true
persistence:
## this enables PVC templates that will create one per pod
##
enabled: true
## rabbitmq data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: "rook-cephfs"
## selector can be used to match an existing PersistentVolume
## selector:
## matchLabels:
## app: my-app
selector: {}
accessMode: ReadWriteMany
## Existing PersistentVolumeClaims
## The value is evaluated as a template
## So, for example, the name can depend on .Release or .Chart
# existingClaim: ""
## If you change this value, you might have to adjust `rabbitmq.diskFreeLimit` as well.
##
size: 8Gi
## Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
create: true
## Min number of pods that must still be available after the eviction
##
minAvailable: 1
## Max number of pods that can be unavailable after the eviction
##
# maxUnavailable: 1
## Network Policy configuration
## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
##
networkPolicy:
## Enable creation of NetworkPolicy resources
##
enabled: true
## The Policy model to apply. When set to false, only pods with the correct
## client label will have network access to the ports RabbitMQ is listening
## on. When true, RabbitMQ will accept connections from any source
## (with the correct destination port).
##
allowExternal: true
## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
##
# additionalRules:
# - matchLabels:
# - role: frontend
# - matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
## Kubernetes service type
service:
type: ClusterIP
## Amqp port
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
port: 5672
## Amqp Tls port
##
tlsPort: 5671
## Node port
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
# nodePort: 30672
## Node port Tls
##
# tlsNodePort: 30671
## Dist port
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
distPort: 25672
## Node port (Manager)
##
# distNodePort: 30676
## RabbitMQ Manager port
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
managerPort: 15672
## Node port (Manager)
##
# managerNodePort: 30673
## RabbitMQ Prometheus metrics port
##
metricsPort: 9419
## Node port for metrics
##
# metricsNodePort: 30674
## Node port for EPMD Discovery
##
# epmdNodePort: 30675
## Extra ports to expose
## E.g.:
## extraPorts:
## - name: new_svc_name
## port: 1234
## targetPort: 1234
##
extraPorts: []
## Load Balancer sources
## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
##
# loadBalancerSourceRanges:
# - 10.10.10.0/24
## Set the ExternalIPs
##
externalIPs:
- 172.17.27.130
## Set the LoadBalancerIP
##
# loadBalancerIP:
## Service labels. Evaluated as a template
##
labels: {}
## Service annotations. Evaluated as a template
## Example:
## annotations:
## service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
##
annotations: {}
## Configure the ingress resource that allows you to access the
## RabbitMQ installation. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
##
ingress:
## Set to true to enable ingress record generation
##
enabled: true
## Path for the default host
##
path: /
## Set this to true in order to add the corresponding annotations for cert-manager
##
certManager: false
## When the ingress is enabled, a host pointing to this will be created
##
hostname: rabbit.csb.gov.tr
## Ingress annotations done as key:value pairs
## For a full list of possible ingress annotations, please see
## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
##
## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
##
annotations: {}
## Enable TLS configuration for the hostname defined at ingress.hostname parameter
## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
## or a custom one if you use the tls.existingSecret parameter
## You can use the ingress.secrets parameter to create this TLS secret or rely on cert-manager to create it
##
tls: false
## existingSecret: name-of-existing-secret
## The list of additional hostnames to be covered with this ingress record.
## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
## extraHosts:
## - name: rabbitmq.local
## path: /
##
## The tls configuration for additional hostnames to be covered with this ingress record.
## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
## extraTls:
## - hosts:
## - rabbitmq.local
## secretName: rabbitmq.local-tls
##
## If you're providing your own certificates, please use this to add the certificates as secrets
## key and certificate should start with -----BEGIN CERTIFICATE----- or
## -----BEGIN RSA PRIVATE KEY-----
##
## name should line up with a tlsSecret set further up
## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
##
## It is also possible to create and manage the certificates outside of this helm chart
## Please see README.md for more information
##
secrets: []
## - name: rabbitmq.local-tls
## key:
## certificate:
##
## Prometheus Metrics
##
metrics:
enabled: true
plugins: "rabbitmq_prometheus"
## Prometheus pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.service.metricsPort }}"
## Prometheus Service Monitor
## ref: https://github.com/coreos/prometheus-operator
##
serviceMonitor:
## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
##
enabled: false
## Specify the namespace in which the serviceMonitor resource will be created
##
# namespace: ""
## Specify the interval at which metrics should be scraped
##
interval: 30s
## Specify the timeout after which the scrape is ended
##
# scrapeTimeout: 30s
## Specify Metric Relabellings to add to the scrape endpoint
##
# relabellings:
## Specify honorLabels parameter to add the scrape endpoint
##
honorLabels: false
## Specify the release for ServiceMonitor. Sometimes it should be custom for prometheus operator to work
##
# release: ""
## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
##
additionalLabels: {}
## Custom PrometheusRule to be defined
## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
##
prometheusRule:
enabled: false
additionalLabels: {}
namespace: ""
## List of rules, used as template by Helm.
## These are just example rules, inspired by https://awesome-prometheus-alerts.grep.to/rules.html
# rules:
# - alert: RabbitmqDown
# expr: rabbitmq_up{service="{{ template "rabbitmq.fullname" . }}"} == 0
# for: 5m
# labels:
# severity: error
# annotations:
# summary: Rabbitmq down (instance {{ "{{ $labels.instance }}" }})
# description: RabbitMQ node down
# - alert: ClusterDown
# expr: |
# sum(rabbitmq_running{service="{{ template "rabbitmq.fullname" . }}"})
# < {{ .Values.replicaCount }}
# for: 5m
# labels:
# severity: error
# annotations:
# summary: Cluster down (instance {{ "{{ $labels.instance }}" }})
# description: |
# Less than {{ .Values.replicaCount }} nodes running in RabbitMQ cluster
# VALUE = {{ "{{ $value }}" }}
# - alert: ClusterPartition
# expr: rabbitmq_partitions{service="{{ template "rabbitmq.fullname" . }}"} > 0
# for: 5m
# labels:
# severity: error
# annotations:
# summary: Cluster partition (instance {{ "{{ $labels.instance }}" }})
# description: |
# Cluster partition
# VALUE = {{ "{{ $value }}" }}
# - alert: OutOfMemory
# expr: |
# rabbitmq_node_mem_used{service="{{ template "rabbitmq.fullname" . }}"}
# / rabbitmq_node_mem_limit{service="{{ template "rabbitmq.fullname" . }}"}
# * 100 > 90
# for: 5m
# labels:
# severity: warning
# annotations:
# summary: Out of memory (instance {{ "{{ $labels.instance }}" }})
# description: |
# Memory available for RabbitMQ is low (< 10%)\n VALUE = {{ "{{ $value }}" }}
# LABELS: {{ "{{ $labels }}" }}
# - alert: TooManyConnections
# expr: rabbitmq_connectionsTotal{service="{{ template "rabbitmq.fullname" . }}"} > 1000
# for: 5m
# labels:
# severity: warning
# annotations:
# summary: Too many connections (instance {{ "{{ $labels.instance }}" }})
# description: |
# RabbitMQ instance has too many connections (> 1000)
# VALUE = {{ "{{ $value }}" }}\n LABELS: {{ "{{ $labels }}" }}
rules: []
## Init Container parameters
## Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each component
## values from the securityContext section of the component
##
volumePermissions:
enabled: false
## Bitnami Minideb image
## ref: https://hub.docker.com/r/bitnami/minideb/tags/
##
image:
registry: docker.io
repository: bitnami/minideb
tag: buster
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: Always
## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## Example:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Init Container resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 100m
# memory: 128Mi
requests: {}
# cpu: 100m
# memory: 128Mi
The outputs of kubectl describe pod rabbitmq-0, kubectl get pv, kubectl get pvc, kubectl get sc, and lastly of "lsblk -f" on one node were attached as images and are not reproduced here.
I had been using an old cluster.yaml file; I added 'allowUninstallWithVolumes: false' under cleanupPolicy, and that solved everything.
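For anyone looking for the exact spot, here is a minimal sketch of where that key sits, assuming cluster.yaml refers to the Rook CephCluster manifest (the storage class above is rook-cephfs); the name and namespace shown are the usual Rook defaults and may differ in your cluster:
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph          # adjust to your cluster
  namespace: rook-ceph
spec:
  cleanupPolicy:
    # with false, the cluster cannot be uninstalled while Ceph-backed volumes still exist
    allowUninstallWithVolumes: false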

ejabberd how to get admin link active?

(This is not a local machine or a local server.)
I'm not able to reach the admin page on my domain or on the server IP either:
mydomain.com:5280/admin/
In an SSH terminal I started ejabberd; its status is:
The node 'ejabberd@mydomain.com' is started with status: started
ejabberd 15.11 is running in that node
Why am I not able to reach it on the domain or the server IP? Is there any way to check which link the admin interface can be opened on?
I installed it on an Ubuntu server; the ejabberd.yml file is:
###
###' ejabberd configuration file
###
###
### The parameters used in this configuration file are explained in more detail
### in the ejabberd Installation and Operation Guide.
### Please consult the Guide in case of doubts, it is included with
### your copy of ejabberd, and is also available online at
### http://www.process-one.net/en/ejabberd/docs/
### The configuration file is written in YAML.
### Refer to http://en.wikipedia.org/wiki/YAML for the brief description.
### However, ejabberd treats different literals as different types:
###
### - unquoted or single-quoted strings. They are called "atoms".
### Example: dog, 'Jupiter', '3.14159', YELLOW
###
### - numeric literals. Example: 3, -45.0, .0
###
### - quoted or folded strings.
### Examples of quoted string: "Lizzard", "orange".
### Example of folded string:
### > Art thou not Romeo,
### and a Montague?
###. =======
###' LOGGING
##
## loglevel: Verbosity of log files generated by ejabberd.
## 0: No ejabberd log at all (not recommended)
## 1: Critical
## 2: Error
## 3: Warning
## 4: Info
## 5: Debug
## loglevel: 4
##
## rotation: Describe how to rotate logs. Either size and/or date can trigger
## log rotation. Setting count to N keeps N rotated logs. Setting count to 0
## does not disable rotation, it instead rotates the file and keeps no previous
## versions around. Setting size to X rotate log when it reaches X bytes.
## To disable rotation set the size to 0 and the date to ""
## Date syntax is taken from the syntax newsyslog uses in newsyslog.conf.
## Some examples:
## $D0 rotate every night at midnight
## $D23 rotate every day at 23:00 hr
## $W0D23 rotate every week on Sunday at 23:00 hr
## $W5D16 rotate every week on Friday at 16:00 hr
## $M1D0 rotate on the first day of every month at midnight
## $M5D6 rotate on every 5th day of the month at 6:00 hr
##
log_rotate_size: 10485760
log_rotate_date: ""
log_rotate_count: 1
##
## overload protection: If you want to limit the number of messages per second
## allowed from error_logger, which is a good idea if you want to avoid a flood
## of messages when system is overloaded, you can set a limit.
## 100 is ejabberd's default.
log_rate_limit: 100
##
## watchdog_admins: Only useful for developers: if an ejabberd process
## consumes a lot of memory, send live notifications to these XMPP
## accounts.
##
## watchdog_admins:
## - "admin#mydomain.com"
###. ===============
###' NODE PARAMETERS
##
## net_ticktime: Specifies net_kernel tick time in seconds. This options must have
## identical value on all nodes, and in most cases shouldn't be changed at all from
## default value.
##
## net_ticktime: 60
###. ================
###' SERVED HOSTNAMES
##
## hosts: Domains served by ejabberd.
## You can define one or several, for example:
## hosts:
## - "example.net"
## - "example.com"
## - "example.org"
##
hosts:
  - "mydomain.com"   # server domain
  - "192.168.1.1"    # my server ip address
##
## route_subdomains: Delegate subdomains to other XMPP servers.
## For example, if this ejabberd serves example.org and you want
## to allow communication with an XMPP server called im.example.org.
##
## route_subdomains: s2s
###. ===============
###' LISTENING PORTS
##
## listen: The ports ejabberd will listen on, which service each is handled
## by and what options to start it with.
##
listen:
-
port: 5222
module: ejabberd_c2s
certfile: "/home/jack/ejabberd/conf/server.pem"
starttls: true
## To enforce TLS encryption for client connections,
## use this instead of the "starttls" option:
## starttls_required: true
##
## Custom OpenSSL options
##
protocol_options:
- "no_sslv3"
## - "no_tlsv1"
max_stanza_size: 65536
shaper: c2s_shaper
access: c2s
-
port: 5269
module: ejabberd_s2s_in
max_stanza_size: 131072
shaper: s2s_shaper
##
## ejabberd_service: Interact with external components (transports, ...)
##
## -
##   port: 8888
##   module: ejabberd_service
##   access: all
##   shaper_rule: fast
##   ip: "127.0.0.1"
##   hosts:
##     "icq.example.org":
##       password: "secret"
##     "sms.example.org":
##       password: "secret"
##
## ejabberd_stun: Handles STUN Binding requests
##
## -
##   port: 3478
##   transport: udp
##   module: ejabberd_stun
##
## To handle XML-RPC requests that provide admin credentials:
##
## -
##   port: 4560
##   module: ejabberd_xmlrpc
##   maxsessions: 10
##   timeout: 5000
##   access_commands:
##     xmlrpc:
##       commands: all
##       options: []
-
port: 5280
module: ejabberd_http
request_handlers:
"/websocket": ejabberd_http_ws
"/bosh": mod_bosh
"/oauth": ejabberd_oauth
"/api": mod_http_api
## "/pub/archive": mod_http_fileserver
web_admin: true
http_bind: true
register: true
captcha: false
###. ==================
###' S2S GLOBAL OPTIONS
##
## s2s_use_starttls: Enable STARTTLS + Dialback for S2S connections.
## Allowed values are: false optional required required_trusted
## You must specify a certificate file.
##
## s2s_use_starttls: optional
##
## s2s_certfile: Specify a certificate file.
##
## s2s_certfile: "/home/jack/ejabberd/conf/server.pem"
## Custom OpenSSL options
##
## s2s_protocol_options:
## - "no_sslv3"
## - "no_tlsv1"
##
## domain_certfile: Specify a different certificate for each served hostname.
##
## host_config:
## "example.org":
## domain_certfile: "/home/jack/ejabberd/conf/example_org.pem"
## "example.com":
## domain_certfile: "/home/jack/ejabberd/conf/example_com.pem"
##
## S2S whitelist or blacklist
##
## Default s2s policy for undefined hosts.
##
## s2s_access: s2s
##
## Outgoing S2S options
##
## Preferred address families (which to try first) and connect timeout
## in milliseconds.
##
## outgoing_s2s_families:
## - ipv4
## - ipv6
## outgoing_s2s_timeout: 10000
###. ==============
###' AUTHENTICATION
##
## auth_method: Method used to authenticate the users.
## The default method is the internal.
## If you want to use a different method,
## comment this line and enable the correct ones.
## auth_method: internal
##
## Store the plain passwords or hashed for SCRAM:
## auth_password_format: plain
## auth_password_format: scram
##
## Define the FQDN if ejabberd doesn't detect it:
## fqdn: "server3.example.com"
##
## Authentication using external script
## Make sure the script is executable by ejabberd.
##
## auth_method: external
## extauth_program: "/path/to/authentication/script"
##
## Authentication using ODBC
## Remember to setup a database in the next section.
##
## auth_method: odbc
##
## Authentication using PAM
##
## auth_method: pam
## pam_service: "pamservicename"
##
## Authentication using LDAP
##
## auth_method: ldap
##
## List of LDAP servers:
## ldap_servers:
## - "localhost"
##
## Encryption of connection to LDAP servers:
## ldap_encrypt: none
## ldap_encrypt: tls
##
## Port to connect to on LDAP servers:
## ldap_port: 389
## ldap_port: 636
##
## LDAP manager:
## ldap_rootdn: "dc=example,dc=com"
##
## Password of LDAP manager:
## ldap_password: "******"
##
## Search base of LDAP directory:
## ldap_base: "dc=example,dc=com"
##
## LDAP attribute that holds user ID:
## ldap_uids:
## - "mail": "%u#mail.example.org"
##
## LDAP filter:
## ldap_filter: "(objectClass=shadowAccount)"
##
## Anonymous login support:
## auth_method: anonymous
## anonymous_protocol: sasl_anon | login_anon | both
## allow_multiple_connections: true | false
##
## host_config:
## "public.example.org":
## auth_method: anonymous
## allow_multiple_connections: false
## anonymous_protocol: sasl_anon
##
## To use both anonymous and internal authentication:
##
## host_config:
## "public.example.org":
## auth_method:
## - internal
## - anonymous
###. ==============
###' DATABASE SETUP
## ejabberd by default uses the internal Mnesia database,
## so you do not necessarily need this section.
## This section provides configuration examples in case
## you want to use other database backends.
## Please consult the ejabberd Guide for details on database creation.
##
## MySQL server:
##
## odbc_type: mysql
## odbc_server: "server"
## odbc_database: "database"
## odbc_username: "username"
## odbc_password: "password"
##
## If you want to specify the port:
## odbc_port: 1234
##
## PostgreSQL server:
##
## odbc_type: pgsql
## odbc_server: "server"
## odbc_database: "database"
## odbc_username: "username"
## odbc_password: "password"
##
## If you want to specify the port:
## odbc_port: 1234
##
## If you use PostgreSQL, have a large database, and need a
## faster but inexact replacement for "select count(*) from users"
##
## pgsql_users_number_estimate: true
##
## SQLite:
##
## odbc_type: sqlite
## odbc_database: "/home/jack/ejabberd/database/ejabberd.db"
##
## ODBC compatible or MSSQL server:
##
## odbc_type: odbc
## odbc_server: "DSN=ejabberd;UID=ejabberd;PWD=ejabberd"
##
## Number of connections to open to the database for each virtual host
##
## odbc_pool_size: 10
##
## Interval to make a dummy SQL request to keep the connections to the
## database alive. Specify in seconds: for example 28800 means 8 hours
##
## odbc_keepalive_interval: undefined
###. ===============
###' TRAFFIC SHAPERS
shaper:
  ##
  ## The "normal" shaper limits traffic speed to 1000 B/s
  ##
  normal: 1000
  ##
  ## The "fast" shaper limits traffic speed to 50000 B/s
  ##
  fast: 50000
##
## This option specifies the maximum number of elements in the queue
## of the FSM. Refer to the documentation for details.
## max_fsm_queue: 1000
###. ====================
###' ACCESS CONTROL LISTS
acl:
  ##
  ## The 'admin' ACL grants administrative privileges to XMPP accounts.
  ## You can put here as many accounts as you want.
  ##
  admin:
    user:
      - "admin": "mydomain.com"
  ##
  ## Blocked users
  ##
  ## blocked:
  ##   user:
  ##     - "baduser": "example.org"
  ##     - "test"
  ## Local users: don't modify this.
  ##
  local:
    user_regexp: ""
  ##
  ## More examples of ACLs
  ##
  ## jabberorg:
  ##   server:
  ##     - "jabber.org"
  ## aleksey:
  ##   user:
  ##     - "aleksey": "jabber.ru"
  ## test:
  ##   user_regexp: "^test"
  ##   user_glob: "test*"
  ##
  ## Loopback network
  ##
  loopback:
    ip:
      - "192.168.1.1" # added server ip address
      - "::1/128"
      - "::FFFF:127.0.0.1/128"
  ##
  ## Bad XMPP servers
  ##
  ## bad_servers:
  ##   server:
  ##     - "xmpp.zombie.org"
  ##     - "xmpp.spam.com"
##
## Define specific ACLs in a virtual host.
##
## host_config:
## "localhost":
## acl:
## admin:
## user:
## - "bob-local": "localhost"
###. ============
###' ACCESS RULES
access_rules:
  ## Maximum number of simultaneous sessions allowed for a single user:
  max_user_sessions:
    all: 10
  ## Maximum number of offline messages that users can have:
  max_user_offline_messages:
    admin: 5000
    all: 100
  ## This rule allows access only for local users:
  local:
    local: allow
  ## Only non-blocked users can use c2s connections:
  c2s:
    blocked: deny
    all: allow
  ## For C2S connections, all users except admins use the "normal" shaper
  c2s_shaper:
    admin: none
    all: normal
  ## All S2S connections use the "fast" shaper
  s2s_shaper:
    all: fast
  ## Only admins can send announcement messages:
  announce:
    admin: allow
  ## Only admins can use the configuration interface:
  configure:
    admin: allow
  ## Admins of this server are also admins of the MUC service:
  muc_admin:
    admin: allow
  ## Only accounts of the local ejabberd server can create rooms:
  muc_create:
    local: allow
  ## All users are allowed to use the MUC service:
  muc:
    all: allow
  ## Only accounts on the local ejabberd server can create Pubsub nodes:
  pubsub_createnode:
    local: allow
  ## In-band registration allows registration of any possible username.
  ## To disable in-band registration, replace 'allow' with 'deny'.
  register:
    all: allow
  ## Only allow to register from localhost
  trusted_network:
    loopback: allow
  ## Do not establish S2S connections with bad servers
  ## s2s:
  ##   bad_servers: deny
  ##   all: allow
## By default the frequency of account registrations from the same IP
## is limited to 1 account every 10 minutes. To disable, specify: infinity
registration_timeout: infinity
##
## Define specific Access Rules in a virtual host.
##
## host_config:
## "localhost":
## access:
## c2s:
## admin: allow
## all: deny
## register:
## all: deny
###. ================
###' DEFAULT LANGUAGE
##
## language: Default language used for server messages.
## language: "en"
##
## Set a different default language in a virtual host.
##
## host_config:
## "localhost":
## language: "ru"
###. =======
###' CAPTCHA
##
## Full path to a script that generates the image.
##
## captcha_cmd: "/home/jack/ejabberd/lib/ejabberd-15.11/priv/bin/captcha.sh"
##
## Host for the URL and port where ejabberd listens for CAPTCHA requests.
##
## captcha_host: "example.org:5280"
##
## Limit CAPTCHA calls per minute for JID/IP to avoid DoS.
##
## captcha_limit: 5
###. =======
###' MODULES
##
## Modules enabled in all ejabberd virtual hosts.
##
modules:
  mod_adhoc: {}
  mod_admin_extra: {}
  mod_announce: # recommends mod_adhoc
    access: announce
  mod_blocking: {} # requires mod_privacy
  mod_caps: {}
  mod_carboncopy: {}
  mod_client_state: {}
  mod_configure: {} # requires mod_adhoc
  mod_disco: {}
  ## mod_echo: {}
  ## mod_irc: {}
  mod_http_bind: {}
  ## mod_http_fileserver:
  ##   docroot: "/var/www"
  ##   accesslog: "/home/jack/ejabberd/logs/access.log"
  mod_last: {}
  mod_muc:
    ## host: "conference.@HOST@"
    access: muc
    access_create: muc_create
    access_persistent: muc_create
    access_admin: muc_admin
  mod_muc_admin: {}
  ## mod_muc_log: {}
  ## mod_multicast: {}
  mod_offline:
    access_max_user_messages: max_user_offline_messages
  mod_ping: {}
  ## mod_pres_counter:
  ##   count: 5
  ##   interval: 60
  mod_privacy: {}
  mod_private: {}
  ## mod_proxy65: {}
  mod_pubsub:
    access_createnode: pubsub_createnode
    ## reduces resource consumption, but XEP incompliant
    ignore_pep_from_offline: true
    ## XEP compliant, but increases resource consumption
    ## ignore_pep_from_offline: false
    last_item_cache: false
    plugins:
      - "flat"
      - "hometree"
      - "pep" # pep requires mod_caps
  mod_register:
    ##
    ## Protect In-Band account registrations with CAPTCHA.
    ##
    ## captcha_protected: true
    ##
    ## Set the minimum informational entropy for passwords.
    ##
    ## password_strength: 32
    ##
    ## After successful registration, the user receives
    ## a message with this subject and body.
    ##
    welcome_message:
      subject: "Welcome!"
      body: |-
        Hi.
        Welcome to this XMPP server.
    ##
    ## When a user registers, send a notification to
    ## these XMPP accounts.
    ##
    ## registration_watchers:
    ##   - "admin1@example.org"
    ##
    ## Only clients in the server machine can register accounts
    ##
    ## ip_access: trusted_network
    ##
    ## Local c2s or remote s2s users cannot register accounts
    ##
    ## access_from: deny
    access_from: register
  mod_roster: {}
  mod_shared_roster: {}
  ## mod_stats: {}
  ## mod_time: {}
  mod_vcard: {}
  mod_version: {}
##
## Enable modules with custom options in a specific virtual host
##
## host_config:
## "localhost":
## modules:
## mod_echo:
## host: "mirror.localhost"
##
## Enable modules management via ejabberdctl for installation and
## uninstallation of public/private contributed modules
## (enabled by default)
##
allow_contrib_modules: true
###.
###'
### Local Variables:
### mode: yaml
### End:
### vim: set filetype=yaml tabstop=8 foldmarker=###',###. foldmethod=marker:
After a lot of trying on the virtual server, I learned from the server provider that they could not open outbound traffic on ports 5280 and 5222, which is required to reach the mydomain.com:5280/admin web admin interface, so it was not open to users.
Inbound access was open for us, which is why I was able to reach the machine over SSH and start ejabberd from the SSH terminal.
Solution: the server needs both inbound and outbound ports open. I updated my server, opened the ports there, and it works now.
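For anyone checking the same thing: the web admin is served by the ejabberd_http listener shown in the config above, so http://mydomain.com:5280/admin/ only works once that port is reachable from outside. A minimal sketch of that listener (the ip line is optional and added here only to make the bind address explicit):
listen:
  -
    port: 5280
    ip: "0.0.0.0"        # bind on all interfaces; the provider firewall must still allow the port
    module: ejabberd_http
    web_admin: true      # serves the admin console at http://<host>:5280/admin/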

ejabberd: can't seem to enable Stream Management

Heyo,
I'm having a bit of a headache trying to get Stream Management (XEP-0198) working in ejabberd 17.04 on Ubuntu 16.10 (I've had this problem since ejabberd 17.03). In spite of searching just about everywhere, I can't seem to find a straight answer beyond either explicitly adding stream_management: true to my config or leaving it out and letting that setting default to true. Neither approach has brought any success, however.
My only indication that Stream Management isn't working at the moment is via the Android app Conversations, which lists the extension as Unavailable, though the app picks up the extension from another server just fine. I can't seem to see any errors in ejabberd's logs either, barring the one time I caused a syntax error that's since been corrected.
This is my current config (yes, I know, it's adapted from a sample and I need to clean some junk out):
##
### ejabberd configuration file
### Archipel Sample default configuration
define_macro:
'CERT_LOCATION': "/certs/live/social.diskseven.com/ejabberd.pem"
'DH_PARAMS': "/certs/live/social.diskseven.com/dhparams.pem"
### =========
### DEBUGGING
# Increase this if you want some insane erlang debug
loglevel: 3
### ================
### SERVED HOSTNAMES
# Change it for your FQDN
hosts:
- "xmpp.diskseven.com"
### ===============
### LISTENING PORTS
listen:
-
#it's a good idea to put xmlrpc behind a reverse proxy
#because you can't use tls directly, make it listen on localhost
ip: "::1"
# and read the Security section on the wiki
port: 4560
module: ejabberd_xmlrpc
access_commands:
xmlrpcaccess:
all : []
## ejabberd c2s
-
ip: "::"
port: 5222
stream_management: true
module: ejabberd_c2s
resend_on_timeout: if_offline
##
## If you installed a SSL
## certificate, specify the full path to the
## file and uncomment this line:
##
certfile: 'CERT_LOCATION'
starttls: true
starttls_required: true
ciphers: "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"
protocol_options:
- "no_sslv2"
- "no_sslv3"
- "no_tlsv1"
- "no_tlsv1_1"
max_stanza_size: 65536000
shaper: c2s_shaper
access: c2s
## ejabbed s2s
-
ip: "::"
port: 5269
module: ejabberd_s2s_in
max_stanza_size: 65536000
## ejabberd http/s and websocket/s
-
ip: "::"
port: 5280
module: ejabberd_http
request_handlers:
"/xmpp": ejabberd_http_ws
# if you want to use starttls with websockets
# the URI will be wss://
# please be sure that the certificate belongs
# to a trusted CA in your browser
certfile: 'CERT_LOCATION'
dhfile: 'DH_PARAMS'
# tls: true
web_admin: true
http_bind: true
### ===
### S2S
s2s_access: all
s2s_use_starttls: required
s2s_certfile: 'CERT_LOCATION' # concatenated cert.
s2s_dhfile: 'DH_PARAMS'
s2s_ciphers: "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"
s2s_protocol_options:
- "no_sslv2"
- "no_sslv3"
- "no_tlsv1"
- "no_tlsv1_1"
## domain_certfile: Specify a different certificate for each served hostname.
##
##host_config:
## "xmpp.diskseven.com":
## domain_certfile: 'CERT_LOCATION'
## "conference.xmpp.diskseven.com":
## domain_certfile: 'CERT_LOCATION'
### ==============
### AUTHENTICATION
auth_method: internal
### ===============
### TRAFFIC SHAPERS
shaper:
# in B/s
normal: 1000
fast: 50000000
### ====================
### ACCESS CONTROL LISTS
acl:
admin:
user:
- "admin": "xmpp.diskseven.com"
local:
user_regexp: ""
### ============
### ACCESS RULES
access:
max_user_sessions:
all: 5
local:
local: allow
c2s:
blocked: deny
all: allow
c2s_shaper:
admin: none
all: fast
s2s_shaper:
all: fast
s2s_access:
all: allow
announce:
admin: allow
configure:
admin: allow
muc_admin:
admin: allow
muc_create:
local: allow
muc:
all: allow
pubsub_createnode:
all: allow
register:
all: deny
xmlrpcaccess:
admin : allow
### Frequency of account registration
registration_timeout: 600
### ================
### DEFAULT LANGUAGE
language: "en"
### =======
### MODULES
modules:
mod_adhoc: []
mod_announce:
access: announce
mod_blocking: []
mod_caps: []
mod_client_state: []
mod_carboncopy: []
mod_configure: []
mod_disco: []
mod_http_bind:
max_inactivity: 400 # timeout value for BOSH, useful for a large number of VMs
mod_http_upload: []
mod_irc: []
mod_last: []
mod_mam: []
mod_muc:
host: "conference.#HOST#"
access: all
access_create: muc_create
access_persistent: muc_create
access_admin: muc_admin
mod_offline: []
mod_privacy: []
mod_private: []
mod_pubsub:
access_createnode: pubsub_createnode
ignore_pep_from_offline: true
last_item_cache: false
max_items_node: 1000
plugins:
- "flat"
- "hometree"
- "pep"
pep_mapping:
"urn:xmpp:microblog:0": "mb"
mod_ping:
send_pings: true
ping_interval: 60
ping_ack_timeout: 30
timeout_action: kill
mod_register:
access: register
mod_roster:
versioning: true
mod_shared_roster: []
mod_time: []
mod_vcard: []
mod_version: []
mod_admin_extra: []
# mod_fail2ban:
# c2s_auth_ban_lifetime: 1300
# c2s_max_auth_failures: 5
In ejabberd 17.03+, stream management is implemented as a separate module: mod_stream_mgmt. You should have read the release notes ;)
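If it helps, here is a minimal sketch of that change, assuming the options simply move from the ejabberd_c2s listener into the module section (i.e. drop stream_management: true and resend_on_timeout from the listener block):
modules:
  mod_stream_mgmt:
    resend_on_timeout: if_offline   # same value as in the listener above
Since the modules list in the config above is explicit, adding the mod_stream_mgmt entry there is the key part.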

ejabberd doesn't store roster persistently

I'm running ejabberd in Kubernetes using the following image: https://hub.docker.com/r/jprjr/ejabberd/
I've tried to test persistence by removing an account from Pidgin and adding it again. Pidgin does not load the previously added roster.
I tried Mnesia and Postgres. The Postgres database seems to be untouched by ejabberd, but I don't get any errors in the logs either.
Any ideas, what could cause this behavior?
My configuration:
hosts: ["example.com"]
loglevel: 4
hide_sensitive_log_data: true
listen:
- port: 5222
module: ejabberd_c2s
access: c2s
shaper: c2s_shaper
zlib: true
starttls_required: true
starttls: true
certfile: "/etc/ejabberd/ejabberd.pem"
- port: 5269
module: ejabberd_s2s_in
shaper: s2s_shaper
max_stanza_size: 65536
s2s_use_starttls: true
s2s_certfile: "/etc/ejabberd/ejabberd.pem"
transport: tcp
auth_method: [ldap]
ldap_servers: ["ldap.example.com"]
ldap_port: 389
ldap_rootdn: "CN=ejabberd,OU=ServiceAccounts,DC=example,DC=com"
ldap_password: "*********"
ldap_base: "OU=User,DC=example,DC=com"
ldap_uids:
- "sAMAccountName": "%u"
ldap_filter: "(&(objectClass=user)(memberof=CN=ejabberdUsers,CN=Users,DC=example,DC=com))"
# tried with and w/o
# default_db: odbc
# sm_db_type: odbc
# odbc_type: pgsql
# odbc_server: "db.example.com"
# odbc_password: "********"
# odbc_port: 10051
shaper:
normal: 1000
fast: 50000
acl:
admin:
user:
"admin1": "example.com"
"admin2": "example.com"
access:
local:
local: allow
c2s:
blocked: deny
all: allow
ejabberd does store the roster persistently.
You need to enable mod_roster in the ejabberd configuration file and set whichever database backend you want, for example:
mod_roster:
db_type: odbc
It seems you are missing that configuration. For more details on the configuration, check these links:
https://github.com/processone/ejabberd/blob/master/test/ejabberd_SUITE_data/ejabberd.yml
https://www.process-one.net/docs/ejabberd/guide_en.html
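To make that concrete, here is a rough sketch of the pieces that would need to be present (uncommented) in the configuration above; the server, port and password come from the commented-out block in the question, the database and username values are placeholders, and the SQL schema shipped with ejabberd still has to be loaded into that Postgres database:
## SQL backend connection (example values)
odbc_type: pgsql
odbc_server: "db.example.com"
odbc_port: 10051
odbc_database: "ejabberd"      # placeholder
odbc_username: "ejabberd"      # placeholder
odbc_password: "********"
## use the SQL backend by default, or at least for the roster
default_db: odbc
modules:
  mod_roster:
    db_type: odbc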