I have been having weird problems in Kubernetes. When I ran the install command, the pods never started. The PVC was eventually bound, but I got the errors below, in this order:
0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[rabbitmq-token-xl9kq configuration data]: timed out waiting for the condition
attachdetach-controller AttachVolume.Attach failed for volume "pvc-08de562a-2ee2-4c81-9b34-d58736b48120" : attachdetachment timeout for volume 0001-0009-rook-ceph-0000000000000001-83154669-0997-11eb-a1ec-726af9b2e1e1
Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[configuration data rabbitmq-token-xl9kq]: timed out waiting for the condition
I installed RabbitMQ via Helm:
helm install rabbitmq --namespace rabbitmq -f rabbitmq-values.yaml bitnami/rabbitmq
Here is my rabbitmq-values.yaml file:
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
# imageRegistry: myRegistryName
# imagePullSecrets:
# - myRegistryKeySecretName
# storageClass: myStorageClass
## Bitnami RabbitMQ image version
## ref: https://hub.docker.com/r/bitnami/rabbitmq/tags/
##
image:
registry: docker.io
repository: bitnami/rabbitmq
tag: 3.8.9-debian-10-r0
## set to true if you would like to see extra information on logs
## it turns on BASH and NAMI debugging in minideb
## ref: https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
##
debug: false
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## String to partially override rabbitmq.fullname template (will maintain the release name)
##
# nameOverride:
## String to fully override rabbitmq.fullname template
##
# fullnameOverride:
## Kubernetes Cluster Domain
##
clusterDomain: cluster.local
## RabbitMQ Authentication parameters
##
auth:
## RabbitMQ application username
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
username: rabbitmq
## RabbitMQ application password
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
password: Qwe123.
# existingPasswordSecret: name-of-existing-secret
## Erlang cookie to determine whether different nodes are allowed to communicate with each other
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
erlangCookie: SWQOKODSQALRPCLNMEQGM4MCSB
# existingErlangSecret: name-of-existing-secret
## Enable encryption to rabbitmq
## ref: https://www.rabbitmq.com/ssl.html
##
tls:
enabled: false
failIfNoPeerCert: true
sslOptionsVerify: verify_peer
caCertificate: |-
serverCertificate: |-
serverKey: |-
# existingSecret: name-of-existing-secret-to-rabbitmq
## Value for the RABBITMQ_LOGS environment variable
## ref: https://www.rabbitmq.com/logging.html#log-file-location
##
logs: '-'
## RabbitMQ Max File Descriptors
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
## ref: https://www.rabbitmq.com/install-debian.html#kernel-resource-limits
##
ulimitNofiles: '65536'
## RabbitMQ maximum available scheduler threads and online scheduler threads. By default it will create a thread per CPU detected; with the following parameters you can tune it manually.
## ref: https://hamidreza-s.github.io/erlang/scheduling/real-time/preemptive/migration/2016/02/09/erlang-scheduler-details.html#scheduler-threads
## ref: https://github.com/bitnami/charts/issues/2189
##
# maxAvailableSchedulers: 2
# onlineSchedulers: 1
## The memory threshold under which RabbitMQ will stop reading from client network sockets, in order to avoid being killed by the OS
## ref: https://www.rabbitmq.com/alarms.html
## ref: https://www.rabbitmq.com/memory.html#threshold
##
memoryHighWatermark:
enabled: true
## Memory high watermark type. Either absolute or relative
##
type: "relative"
## Memory high watermark value.
## The default value of 0.4 stands for 40% of available RAM
## Note: the memory relative limit is applied to resources.limits.memory to calculate the memory threshold
## You can also use an absolute value, e.g.: 256MB
##
value: 0.4
## Plugins to enable
##
plugins: "rabbitmq_management rabbitmq_peer_discovery_k8s"
## Community plugins to download during container initialization.
## Combine it with extraPlugins to also enable them.
##
# communityPlugins:
## Extra plugins to enable
## Use this instead of `plugins` to add new plugins
##
extraPlugins: "rabbitmq_auth_backend_ldap"
## Clustering settings
##
clustering:
addressType: hostname
## Rebalance master for queues in cluster when new replica is created
## ref: https://www.rabbitmq.com/rabbitmq-queues.8.html#rebalance
##
rebalance: false
## forceBoot: executes 'rabbitmqctl force_boot' to force boot a cluster that was shut down unexpectedly in an
## unknown order.
## ref: https://www.rabbitmq.com/rabbitmqctl.8.html#force_boot
##
forceBoot: false
## Loading a RabbitMQ definitions file to configure RabbitMQ
##
loadDefinition:
enabled: false
## Can be templated if needed, e.g.
## existingSecret: "{{ .Release.Name }}-load-definition"
##
# existingSecret:
## Command and args for running the container (set to default if not set). Use array form
##
# command:
# args:
## Additional environment variables to set
## E.g:
## extraEnvVars:
## - name: FOO
## value: BAR
##
extraEnvVars: []
## ConfigMap with extra environment variables
##
# extraEnvVarsCM:
## Secret with extra environment variables
##
# extraEnvVarsSecret:
## Extra ports to be included in container spec, primarily informational
## E.g:
## extraContainerPorts:
## - name: new_port_name
## containerPort: 1234
##
extraContainerPorts: []
## Configuration file content: required cluster configuration
## Do not override unless you know what you are doing.
## To add more configuration, use `extraConfiguration` or `advancedConfiguration` instead
##
configuration: |-
## Username and password
default_user = {{ .Values.auth.username }}
default_pass = CHANGEME
## Clustering
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
cluster_formation.k8s.host = kubernetes.default.svc.{{ .Values.clusterDomain }}
cluster_formation.node_cleanup.interval = 10
cluster_formation.node_cleanup.only_log_warning = true
cluster_partition_handling = autoheal
# queue master locator
queue_master_locator = min-masters
# enable guest user
loopback_users.guest = false
{{ tpl .Values.extraConfiguration . }}
{{- if .Values.auth.tls.enabled }}
ssl_options.verify = {{ .Values.auth.tls.sslOptionsVerify }}
listeners.ssl.default = {{ .Values.service.tlsPort }}
ssl_options.fail_if_no_peer_cert = {{ .Values.auth.tls.failIfNoPeerCert }}
ssl_options.cacertfile = /opt/bitnami/rabbitmq/certs/ca_certificate.pem
ssl_options.certfile = /opt/bitnami/rabbitmq/certs/server_certificate.pem
ssl_options.keyfile = /opt/bitnami/rabbitmq/certs/server_key.pem
{{- end }}
{{- if .Values.ldap.enabled }}
auth_backends.1 = rabbit_auth_backend_ldap
auth_backends.2 = internal
{{- range $index, $server := .Values.ldap.servers }}
auth_ldap.servers.{{ add $index 1 }} = {{ $server }}
{{- end }} auth_ldap.port = {{ .Values.ldap.port }}
auth_ldap.user_dn_pattern = {{ .Values.ldap.user_dn_pattern }}
{{- if .Values.ldap.tls.enabled }}
auth_ldap.use_ssl = true
{{- end }}
{{- end }}
{{- if .Values.metrics.enabled }}
## Prometheus metrics
prometheus.tcp.port = 9419
{{- end }}
{{- if .Values.memoryHighWatermark.enabled }}
## Memory Threshold
total_memory_available_override_value = {{ include "rabbitmq.toBytes" .Values.resources.limits.memory }}
vm_memory_high_watermark.{{ .Values.memoryHighWatermark.type }} = {{ .Values.memoryHighWatermark.value }}
{{- end }}
## Configuration file content: extra configuration
## Use this instead of `configuration` to add more configuration
##
extraConfiguration: |-
#default_vhost = {{ .Release.Namespace }}-vhost
#disk_free_limit.absolute = 50MB
#load_definitions = /app/load_definition.json
## Configuration file content: advanced configuration
## Use this as additional configuration in classic config format (Erlang term configuration format)
##
## If you set LDAP with TLS/SSL enabled and you are using self-signed certificates, uncomment these lines.
## advancedConfiguration: |-
## [{
## rabbitmq_auth_backend_ldap,
## [{
## ssl_options,
## [{
## verify, verify_none
## }, {
## fail_if_no_peer_cert,
## false
## }]
## ]}
## }].
##
advancedConfiguration: |-
## LDAP configuration
##
ldap:
enabled: false
## List of LDAP servers hostnames
##
servers: []
## LDAP servers port
##
port: "389"
## Pattern used to translate the provided username into a value to be used for the LDAP bind
## ref: https://www.rabbitmq.com/ldap.html#usernames-and-dns
##
user_dn_pattern: cn=${username},dc=example,dc=org
tls:
## If you enabled TLS/SSL you can set advanced options using the advancedConfiguration parameter.
##
enabled: false
## extraVolumes and extraVolumeMounts allows you to mount other volumes
## Examples:
## extraVolumeMounts:
## - name: extras
## mountPath: /usr/share/extras
## readOnly: true
## extraVolumes:
## - name: extras
## emptyDir: {}
extraVolumeMounts: []
extraVolumes: []
## Optionally specify extra secrets to be created by the chart.
## This can be useful when combined with load_definitions to automatically create the secret containing the definitions to be loaded.
## Example:
## extraSecrets:
## load-definition:
## load_definition.json: |
## {
## ...
## }
##
extraSecrets: {}
## Number of RabbitMQ replicas to deploy
##
replicaCount: 3
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## RabbitMQ should be initialized one by one when building cluster for the first time.
## Therefore, the default value of podManagementPolicy is 'OrderedReady'
## Once the RabbitMQ participates in the cluster, it waits for a response from another
## RabbitMQ in the same cluster at reboot, except the last RabbitMQ of the same cluster.
## If the cluster exits gracefully, you do not need to change the podManagementPolicy
## because the first RabbitMQ of the statefulset always will be last of the cluster.
## However if the last RabbitMQ of the cluster is not the first RabbitMQ due to a failure,
## you must change podManagementPolicy to 'Parallel'.
## ref : https://www.rabbitmq.com/clustering.html#restarting
##
podManagementPolicy: OrderedReady
## Pod labels. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## Pod annotations. Evaluated as a template
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## updateStrategy for RabbitMQ statefulset
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
##
updateStrategyType: RollingUpdate
## Name of the priority class to be used by RabbitMQ pods, priority class needs to be created beforehand
## Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
## Affinity for pod assignment. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Node labels for pod assignment. Evaluated as a template
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Tolerations for pod assignment. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## RabbitMQ pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
##
podSecurityContext:
fsGroup: 1001
runAsUser: 1001
## RabbitMQ containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## Example:
## containerSecurityContext:
## capabilities:
## drop: ["NET_RAW"]
## readOnlyRootFilesystem: true
##
containerSecurityContext: {}
## RabbitMQ containers' resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits:
cpu: 1000m
memory: 2Gi
requests:
cpu: 1000m
memory: 2Gi
## RabbitMQ containers' liveness and readiness probes.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
enabled: true
initialDelaySeconds: 120
timeoutSeconds: 20
periodSeconds: 30
failureThreshold: 3
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 10
timeoutSeconds: 20
periodSeconds: 30
failureThreshold: 3
successThreshold: 1
## Custom Liveness probe
##
customLivenessProbe: {}
## Custom Readiness probe
##
customReadinessProbe: {}
## Add init containers to the pod
## Example:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: {}
## Add sidecars to the pod.
## Example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: {}
## RabbitMQ pods ServiceAccount
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
## Specifies whether a ServiceAccount should be created
##
create: true
## The name of the ServiceAccount to use.
## If not set and create is true, a name is generated using the rabbitmq.fullname template
##
# name:
## Role Based Access
## ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
## Specifies whether RBAC rules should be created
## binding RabbitMQ ServiceAccount to a role
## that allows RabbitMQ pods querying the K8s API
##
create: true
persistence:
## this enables PVC templates that will create one per pod
##
enabled: true
## rabbitmq data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: "rook-cephfs"
## selector can be used to match an existing PersistentVolume
## selector:
## matchLabels:
## app: my-app
selector: {}
accessMode: ReadWriteMany
## Existing PersistentVolumeClaims
## The value is evaluated as a template
## So, for example, the name can depend on .Release or .Chart
# existingClaim: ""
## If you change this value, you might have to adjust `rabbitmq.diskFreeLimit` as well.
##
size: 8Gi
## Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
create: true
## Min number of pods that must still be available after the eviction
##
minAvailable: 1
## Max number of pods that can be unavailable after the eviction
##
# maxUnavailable: 1
## Network Policy configuration
## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
##
networkPolicy:
## Enable creation of NetworkPolicy resources
##
enabled: true
## The Policy model to apply. When set to false, only pods with the correct
## client label will have network access to the ports RabbitMQ is listening
## on. When true, RabbitMQ will accept connections from any source
## (with the correct destination port).
##
allowExternal: true
## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
##
# additionalRules:
# - matchLabels:
# - role: frontend
# - matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
## Kubernetes service type
service:
type: ClusterIP
## Amqp port
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
port: 5672
## Amqp Tls port
##
tlsPort: 5671
## Node port
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
# nodePort: 30672
## Node port Tls
##
# tlsNodePort: 30671
## Dist port
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
distPort: 25672
## Node port (Manager)
##
# distNodePort: 30676
## RabbitMQ Manager port
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
##
managerPort: 15672
## Node port (Manager)
##
# managerNodePort: 30673
## RabbitMQ Prometheus metrics port
##
metricsPort: 9419
## Node port for metrics
##
# metricsNodePort: 30674
## Node port for EPMD Discovery
##
# epmdNodePort: 30675
## Extra ports to expose
## E.g.:
## extraPorts:
## - name: new_svc_name
## port: 1234
## targetPort: 1234
##
extraPorts: []
## Load Balancer sources
## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
##
# loadBalancerSourceRanges:
# - 10.10.10.0/24
## Set the ExternalIPs
##
externalIPs:
- 172.17.27.130
## Set the LoadBalancerIP
##
# loadBalancerIP:
## Service labels. Evaluated as a template
##
labels: {}
## Service annotations. Evaluated as a template
## Example:
## annotations:
## service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
##
annotations: {}
## Configure the ingress resource that allows you to access the
## RabbitMQ installation. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
##
ingress:
## Set to true to enable ingress record generation
##
enabled: true
## Path for the default host
##
path: /
## Set this to true in order to add the corresponding annotations for cert-manager
##
certManager: false
## When the ingress is enabled, a host pointing to this will be created
##
hostname: rabbit.csb.gov.tr
## Ingress annotations done as key:value pairs
## For a full list of possible ingress annotations, please see
## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
##
## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
##
annotations: {}
## Enable TLS configuration for the hostname defined at ingress.hostname parameter
## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
## or a custom one if you use the tls.existingSecret parameter
## You can use the ingress.secrets parameter to create this TLS secret or relay on cert-manager to create it
##
tls: false
## existingSecret: name-of-existing-secret
## The list of additional hostnames to be covered with this ingress record.
## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
## extraHosts:
## - name: rabbitmq.local
## path: /
##
## The tls configuration for additional hostnames to be covered with this ingress record.
## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
## extraTls:
## - hosts:
## - rabbitmq.local
## secretName: rabbitmq.local-tls
##
## If you're providing your own certificates, please use this to add the certificates as secrets
## key and certificate should start with -----BEGIN CERTIFICATE----- or
## -----BEGIN RSA PRIVATE KEY-----
##
## name should line up with a tlsSecret set further up
## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
##
## It is also possible to create and manage the certificates outside of this helm chart
## Please see README.md for more information
##
secrets: []
## - name: rabbitmq.local-tls
## key:
## certificate:
##
## Prometheus Metrics
##
metrics:
enabled: true
plugins: "rabbitmq_prometheus"
## Prometheus pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.service.metricsPort }}"
## Prometheus Service Monitor
## ref: https://github.com/coreos/prometheus-operator
##
serviceMonitor:
## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
##
enabled: false
## Specify the namespace in which the serviceMonitor resource will be created
##
# namespace: ""
## Specify the interval at which metrics should be scraped
##
interval: 30s
## Specify the timeout after which the scrape is ended
##
# scrapeTimeout: 30s
## Specify Metric Relabellings to add to the scrape endpoint
##
# relabellings:
## Specify honorLabels parameter to add the scrape endpoint
##
honorLabels: false
## Specify the release for ServiceMonitor. Sometimes it should be custom for prometheus operator to work
##
# release: ""
## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
##
additionalLabels: {}
## Custom PrometheusRule to be defined
## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
##
prometheusRule:
enabled: false
additionalLabels: {}
namespace: ""
## List of rules, used as template by Helm.
## These are just examples rules inspired from https://awesome-prometheus-alerts.grep.to/rules.html
# rules:
# - alert: RabbitmqDown
# expr: rabbitmq_up{service="{{ template "rabbitmq.fullname" . }}"} == 0
# for: 5m
# labels:
# severity: error
# annotations:
# summary: Rabbitmq down (instance {{ "{{ $labels.instance }}" }})
# description: RabbitMQ node down
# - alert: ClusterDown
# expr: |
# sum(rabbitmq_running{service="{{ template "rabbitmq.fullname" . }}"})
# < {{ .Values.replicaCount }}
# for: 5m
# labels:
# severity: error
# annotations:
# summary: Cluster down (instance {{ "{{ $labels.instance }}" }})
# description: |
# Less than {{ .Values.replicaCount }} nodes running in RabbitMQ cluster
# VALUE = {{ "{{ $value }}" }}
# - alert: ClusterPartition
# expr: rabbitmq_partitions{service="{{ template "rabbitmq.fullname" . }}"} > 0
# for: 5m
# labels:
# severity: error
# annotations:
# summary: Cluster partition (instance {{ "{{ $labels.instance }}" }})
# description: |
# Cluster partition
# VALUE = {{ "{{ $value }}" }}
# - alert: OutOfMemory
# expr: |
# rabbitmq_node_mem_used{service="{{ template "rabbitmq.fullname" . }}"}
# / rabbitmq_node_mem_limit{service="{{ template "rabbitmq.fullname" . }}"}
# * 100 > 90
# for: 5m
# labels:
# severity: warning
# annotations:
# summary: Out of memory (instance {{ "{{ $labels.instance }}" }})
# description: |
# Memory available for RabbmitMQ is low (< 10%)\n VALUE = {{ "{{ $value }}" }}
# LABELS: {{ "{{ $labels }}" }}
# - alert: TooManyConnections
# expr: rabbitmq_connectionsTotal{service="{{ template "rabbitmq.fullname" . }}"} > 1000
# for: 5m
# labels:
# severity: warning
# annotations:
# summary: Too many connections (instance {{ "{{ $labels.instance }}" }})
# description: |
# RabbitMQ instance has too many connections (> 1000)
# VALUE = {{ "{{ $value }}" }}\n LABELS: {{ "{{ $labels }}" }}
rules: []
## Init Container parameters
## Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each component
## values from the securityContext section of the component
##
volumePermissions:
enabled: false
## Bitnami Minideb image
## ref: https://hub.docker.com/r/bitnami/minideb/tags/
##
image:
registry: docker.io
repository: bitnami/minideb
tag: buster
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: Always
## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## Example:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Init Container resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 100m
# memory: 128Mi
requests: {}
# cpu: 100m
# memory: 128Mi
Output of kubectl describe pod rabbitmq-0, kubectl get pv, kubectl get pvc, and kubectl get sc (outputs omitted).
Lastly, here is the "lsblk -f" output from one node (output omitted).
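A sketch of additional commands for inspecting the Ceph CSI side of such attach timeouts (assuming Rook's default rook-ceph namespace; pod and container names may differ between Rook versions):

# check that the CephFS CSI driver pods are healthy
kubectl -n rook-ceph get pods -l app=csi-cephfsplugin
# inspect the attach side of the provisioner
kubectl -n rook-ceph logs deploy/csi-cephfsplugin-provisioner -c csi-attacher
# list VolumeAttachment objects to find the stuck PV
kubectl get volumeattachments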
I had been using an old Rook cluster.yaml file. I added 'allowUninstallWithVolumes: false' under cleanupPolicy, and that solved everything.
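For reference, a minimal sketch of where that setting lives in the Rook CephCluster manifest (recent Rook versions; the rest of the spec stays as it was):

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # ... rest of the cluster spec unchanged ...
  cleanupPolicy:
    # refuse to tear the cluster down while volumes still exist
    allowUninstallWithVolumes: false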
Related
I have the MongoDB Helm chart installed on my k8s cluster (https://github.com/bitnami/charts/tree/master/bitnami/mongodb).
I also have kube-prometheus-stack installed on my k8s cluster. (https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
I've set up a Grafana dashboard for MongoDB which should pull in data from a Prometheus data source. (https://grafana.com/grafana/dashboards/2583)
However, my grafana dashboard is empty with no data.
I'm wondering if I have not configured something in the Helm chart properly. Please see my MongoDB chart values below.
mongodb values.yaml
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
global:
# imageRegistry: myRegistryName
# imagePullSecrets:
# - myRegistryKeySecretName
# storageClass: myStorageClass
## Override the namespace for resource deployed by the chart, but can itself be overridden by the local namespaceOverride
namespaceOverride: mongodb
image:
## Bitnami MongoDB registry
##
registry: docker.io
## Bitnami MongoDB image name
##
repository: bitnami/mongodb
## Bitnami MongoDB image tag
## ref: https://hub.docker.com/r/bitnami/mongodb/tags/
##
tag: 4.4.1-debian-10-r13
## Specify an imagePullPolicy
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## Set to true if you would like to see extra information on logs
## It turns on Bitnami debugging in minideb-extras-base
## ref: https://github.com/bitnami/minideb-extras-base
debug: false
## String to partially override mongodb.fullname template (will maintain the release name)
##
# nameOverride:
## String to fully override mongodb.fullname template
##
# fullnameOverride:
## Kubernetes Cluster Domain
##
clusterDomain: cluster.local
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## MongoDB architecture. Allowed values: standalone or replicaset
##
architecture: replicaset
## Use StatefulSet instead of Deployment when deploying standalone
##
useStatefulSet: false
## MongoDB Authentication parameters
##
auth:
## Enable authentication
## ref: https://docs.mongodb.com/manual/tutorial/enable-authentication/
##
enabled: true
## MongoDB root password
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#setting-the-root-password-on-first-run
##
rootPassword: "<redacted>"
## MongoDB custom user and database
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#creating-a-user-and-database-on-first-run
##
# username: username
# password: password
# database: database
## Key used for replica set authentication
## Ignored when mongodb.architecture=standalone
##
replicaSetKey: <redacted>
## Existing secret with MongoDB credentials
## NOTE: When it's set the previous parameters are ignored.
##
# existingSecret: name-of-existing-secret
## Name of the replica set
## Ignored when mongodb.architecture=standalone
##
replicaSetName: rs0
## Enable DNS hostnames in the replica set config
## Ignored when mongodb.architecture=standalone
## Ignored when externalAccess.enabled=true
##
replicaSetHostnames: true
## Whether to enable/disable IPv6 on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-ipv6
##
enableIPv6: false
## Whether to enable/disable DirectoryPerDB on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-directoryperdb
##
directoryPerDB: false
## MongoDB System Log configuration
## ref: https://github.com/bitnami/bitnami-docker-mongodb#configuring-system-log-verbosity-level
##
systemLogVerbosity: 0
disableSystemLog: false
## MongoDB configuration file for Primary and Secondary nodes. For documentation of all options, see:
## http://docs.mongodb.org/manual/reference/configuration-options/
## Example:
## configuration:
## # where and how to store data.
## storage:
## dbPath: /bitnami/mongodb/data/db
## journal:
## enabled: true
## directoryPerDB: false
## # where to write logging data
## systemLog:
## destination: file
## quiet: false
## logAppend: true
## logRotate: reopen
## path: /opt/bitnami/mongodb/logs/mongodb.log
## verbosity: 0
## # network interfaces
## net:
## port: 27017
## unixDomainSocket:
## enabled: true
## pathPrefix: /opt/bitnami/mongodb/tmp
## ipv6: false
## bindIpAll: true
## # replica set options
## #replication:
## #replSetName: replicaset
## #enableMajorityReadConcern: true
## # process management options
## processManagement:
## fork: false
## pidFilePath: /opt/bitnami/mongodb/tmp/mongodb.pid
## # set parameter options
## setParameter:
## enableLocalhostAuthBypass: true
## # security options
## security:
## authorization: disabled
## #keyFile: /opt/bitnami/mongodb/conf/keyfile
##
configuration: ""
## ConfigMap with MongoDB configuration for Primary and Secondary nodes
## NOTE: When it's set, the configuration parameter is ignored
##
# existingConfigmap:
## initdb scripts
## Specify dictionary of scripts to be run at first boot
## Example:
## initdbScripts:
## my_init_script.sh: |
## #!/bin/bash
## echo "Do something."
initdbScripts: {}
## Existing ConfigMap with custom init scripts
##
# initdbScriptsConfigMap:
## Command and args for running the container (set to default if not set). Use array form
##
# command:
# args:
## Additional command line flags
## Example:
## extraFlags:
## - "--wiredTigerCacheSizeGB=2"
##
extraFlags: []
## Additional environment variables to set
## E.g:
## extraEnvVars:
## - name: FOO
## value: BAR
##
extraEnvVars: []
## ConfigMap with extra environment variables
##
# extraEnvVarsCM:
## Secret with extra environment variables
##
# extraEnvVarsSecret:
## Annotations to be added to the MongoDB statefulset. Evaluated as a template.
##
annotations: {}
## Additional labels to be added to the MongoDB statefulset. Evaluated as a template.
##
labels: {}
## Number of MongoDB replicas to deploy.
## Ignored when mongodb.architecture=standalone
##
replicaCount: 1
## StrategyType for MongoDB statefulset
## It can be set to RollingUpdate or Recreate by default.
##
strategyType: RollingUpdate
## MongoDB should be initialized one by one when building the replicaset for the first time.
##
podManagementPolicy: OrderedReady
## Affinity for pod assignment. Evaluated as a template.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Node labels for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Tolerations for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## Labels for MongoDB pods. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## Annotations for MongoDB pods. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## MongoDB pods' priority.
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
# priorityClassName: ""
## MongoDB pods' Security Context.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
##
podSecurityContext:
enabled: true
fsGroup: 1001
## sysctl settings
## Example:
## sysctls:
## - name: net.core.somaxconn
## value: "10000"
##
sysctls: []
## MongoDB containers' Security Context (main and metrics container).
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
## MongoDB containers' resource requests and limits.
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 100m
# memory: 128Mi
requests: {}
# cpu: 100m
# memory: 128Mi
## MongoDB pods' liveness and readiness probes. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
## Custom Liveness probes for MongoDB pods
##
customLivenessProbe: {}
## Custom Readiness probes for MongoDB pods
##
customReadinessProbe: {}
## Add init containers to the MongoDB pods.
## Example:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: {}
## Add sidecars to the MongoDB pods.
## Example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: {}
## extraVolumes and extraVolumeMounts allows you to mount other volumes on MongoDB pods
## Examples:
## extraVolumeMounts:
## - name: extras
## mountPath: /usr/share/extras
## readOnly: true
## extraVolumes:
## - name: extras
## emptyDir: {}
extraVolumeMounts: []
extraVolumes: []
## MongoDB Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
create: true
## Min number of pods that must still be available after the eviction
##
minAvailable: 1
## Max number of pods that can be unavailable after the eviction
##
# maxUnavailable: 1
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
enabled: true
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
## Ignored when mongodb.architecture=replicaset
##
# existingClaim:
## PV Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner.
##
# storageClass: "-"
## PV Access Mode
##
accessModes:
- ReadWriteOnce
## PVC size
##
size: 50Gi
## PVC annotations
##
annotations: {}
## The path the volume will be mounted at, useful when using different
## MongoDB images.
##
mountPath: /bitnami/mongodb
## The subdirectory of the volume to mount to, useful in dev environments
## and one PV for multiple services.
##
subPath: ""
## Service parameters
##
service:
## Service type
##
type: ClusterIP
## MongoDB service port
##
port: 27017
## MongoDB service port name
##
portName: mongodb
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
nodePort: ""
## MongoDB service clusterIP IP
##
# clusterIP: None
## Specify the externalIP value for ClusterIP service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
##
externalIPs: []
## Specify the loadBalancerIP value for LoadBalancer service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
##
# loadBalancerIP:
## Specify the loadBalancerSourceRanges value for LoadBalancer service types.
## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
##
loadBalancerSourceRanges: []
## Provide any additional annotations which may be required. Evaluated as a template
##
annotations: {}
## External Access to MongoDB nodes configuration
##
externalAccess:
## Enable Kubernetes external cluster access to MongoDB nodes
##
enabled: true
## External IPs auto-discovery configuration
## An init container is used to auto-detect LB IPs or node ports by querying the K8s API
## Note: RBAC might be required
##
autoDiscovery:
## Enable external IP/ports auto-discovery
##
enabled: true
## Bitnami Kubectl image
## ref: https://hub.docker.com/r/bitnami/kubectl/tags/
##
image:
registry: docker.io
repository: bitnami/kubectl
tag: 1.18.9-debian-10-r4
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## Example:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Init Container resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 100m
# memory: 128Mi
requests: {}
# cpu: 100m
# memory: 128Mi
## Parameters to configure K8s service(s) used to externally access MongoDB
## A new service per broker will be created
##
service:
## Service type. Allowed values: LoadBalancer or NodePort
##
type: LoadBalancer
## Port used when service type is LoadBalancer
##
port: 27017
## Array of load balancer IPs for each MongoDB node. Length must be the same as replicaCount
## Example:
## loadBalancerIPs:
## - X.X.X.X
## - Y.Y.Y.Y
##
loadBalancerIPs: []
## Load Balancer sources
## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
## Example:
## loadBalancerSourceRanges:
## - 10.10.10.0/24
##
loadBalancerSourceRanges: []
## Array of node ports used for each MongoDB node. Length must be the same as replicaCount
## Example:
## nodePorts:
## - 30001
## - 30002
##
nodePorts: []
## When service type is NodePort, you can specify the domain used for MongoDB advertised hostnames.
## If not specified, the container will try to get the kubernetes node external IP
##
# domain: mydomain.com
## Provide any additional annotations which may be required. Evaluated as a template
##
annotations: {}
##
## MongoDB Arbiter parameters.
##
arbiter:
## Enable deploying the MongoDB Arbiter
## https://docs.mongodb.com/manual/tutorial/add-replica-set-arbiter/
enabled: true
## MongoDB configuration file for the Arbiter. For documentation of all options, see:
## http://docs.mongodb.org/manual/reference/configuration-options/
##
configuration: ""
## ConfigMap with MongoDB configuration for the Arbiter
## NOTE: When it's set the arbiter.configuration parameter is ignored
##
# existingConfigmap:
## Command and args for running the container (set to default if not set). Use array form
##
# command:
# args:
## Additional command line flags
## Example:
## extraFlags:
## - "--wiredTigerCacheSizeGB=2"
##
extraFlags: []
## Additional environment variables to set
## E.g:
## extraEnvVars:
## - name: FOO
## value: BAR
##
extraEnvVars: []
## ConfigMap with extra environment variables
##
# extraEnvVarsCM:
## Secret with extra environment variables
##
# extraEnvVarsSecret:
## Annotations to be added to the Arbiter statefulset. Evaluated as a template.
##
annotations: {}
## Additional labels to be added to the Arbiter statefulset. Evaluated as a template.
##
labels: {}
## Affinity for pod assignment. Evaluated as a template.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Node labels for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Tolerations for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## Labels for MongoDB Arbiter pods. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## Annotations for MongoDB Arbiter pods. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## MongoDB Arbiter pods' priority.
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
# priorityClassName: ""
## MongoDB Arbiter pods' Security Context.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
##
podSecurityContext:
enabled: true
fsGroup: 1001
## sysctl settings
## Example:
## sysctls:
## - name: net.core.somaxconn
## value: "10000"
##
sysctls: []
## MongoDB Arbiter containers' Security Context (only main container).
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
containerSecurityContext:
enabled: true
runAsUser: 1001
## MongoDB Arbiter containers' resource requests and limits.
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 100m
# memory: 128Mi
requests: {}
# cpu: 100m
# memory: 128Mi
## MongoDB Arbiter pods' liveness and readiness probes. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
## Custom Liveness probes for MongoDB Arbiter pods
##
customLivenessProbe: {}
## Custom Readiness probes for MongoDB Arbiter pods
##
customReadinessProbe: {}
## Add init containers to the MongoDB Arbiter pods.
## Example:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: {}
## Add sidecars to the MongoDB Arbiter pods.
## Example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: {}
## extraVolumes and extraVolumeMounts allows you to mount other volumes on MongoDB Arbiter pods
## Examples:
## extraVolumeMounts:
## - name: extras
## mountPath: /usr/share/extras
## readOnly: true
## extraVolumes:
## - name: extras
## emptyDir: {}
extraVolumeMounts: []
extraVolumes: []
## MongoDB Arbiter Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
create: false
## Min number of pods that must still be available after the eviction
##
minAvailable: 1
## Max number of pods that can be unavailable after the eviction
##
# maxUnavailable: 1
## ServiceAccount
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
## Specifies whether a ServiceAccount should be created
##
create: true
## The name of the ServiceAccount to use.
## If not set and create is true, a name is generated using the mongodb.fullname template
##
# name:
## Role Based Access
## ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
## Specifies whether RBAC rules should be created
## binding MongoDB ServiceAccount to a role
## that allows MongoDB pods querying the K8s API
##
create: true
## Init Container parameters
## Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each component
## values from the securityContext section of the component
##
volumePermissions:
enabled: false
## Bitnami Minideb image
## ref: https://hub.docker.com/r/bitnami/minideb/tags/
##
image:
registry: docker.io
repository: bitnami/minideb
tag: buster
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: Always
## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## Example:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Init Container resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 100m
# memory: 128Mi
requests: {}
# cpu: 100m
# memory: 128Mi
## Init container Security Context
## Note: the chown of the data folder is done to containerSecurityContext.runAsUser
## and not the below volumePermissions.securityContext.runAsUser
## When runAsUser is set to the special value "auto", the init container will try to chown the
## data folder to an auto-determined user and group, using the commands: `id -u`:`id -G | cut -d" " -f2`
## "auto" is especially useful for OpenShift which has scc with dynamic userids (and 0 is not allowed).
## You may want to use this volumePermissions.securityContext.runAsUser="auto" in combination with
## podSecurityContext.enabled=false,containerSecurityContext.enabled=false and shmVolume.chmod.enabled=false
##
securityContext:
runAsUser: 0
## Prometheus Exporter / Metrics
##
metrics:
enabled: true
## Bitnami MongoDB Prometheus Exporter image
## ref: https://hub.docker.com/r/bitnami/mongodb-exporter/tags/
##
image:
registry: docker.io
repository: bitnami/mongodb-exporter
tag: 0.11.1-debian-10-r32
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## String with extra flags to the metrics exporter
## ref: https://github.com/percona/mongodb_exporter/blob/master/mongodb_exporter.go
##
extraFlags: ""
## String with additional URI options to the metrics exporter
## ref: https://docs.mongodb.com/manual/reference/connection-string
##
extraUri: ""
## Metrics exporter container resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 100m
# memory: 128Mi
requests: {}
# cpu: 100m
# memory: 128Mi
## Prometheus Exporter service configuration
##
service:
## Annotations for Prometheus Exporter pods. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.metrics.service.port }}"
prometheus.io/path: "/metrics"
type: ClusterIP
port: 9216
## Metrics exporter liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
##
livenessProbe:
enabled: true
initialDelaySeconds: 15
periodSeconds: 5
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 1
failureThreshold: 3
successThreshold: 1
## Prometheus Service Monitor
## ref: https://github.com/coreos/prometheus-operator
## https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md
##
serviceMonitor:
## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
enabled: true
## Specify the namespace where Prometheus Operator is running
##
# namespace: monitoring
## Specify the interval at which metrics should be scraped
##
interval: 30s
## Specify the timeout after which the scrape is ended
##
# scrapeTimeout: 30s
## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
##
additionalLabels: {}
## Custom PrometheusRule to be defined
## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
##
prometheusRule:
enabled: false
additionalLabels: {}
## Specify the namespace where Prometheus Operator is running
##
# namespace: monitoring
## Define individual alerting rules as required
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#rulegroup
## https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/
##
rules: {}
Installing Prometheus using the "prometheus-community/kube-prometheus-stack" helm chart could be quite an extensive topic in itself, considering that it has a lot of configurable options.
As the helm chart comes with "prometheus operator", we have used PodMonitor and/or ServiceMonitor CRD's as they provide far more configuration options. Here's some documentation around that.
We've installed it with "prometheus.prometheusSpec.serviceMonitorSelector.matchLabels" set to a label value, something like this:
serviceMonitorSelector:
matchLabels:
monitoring-platform: core-prometheus
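The equivalent at install time would look something like this (the release name "prometheus" is just a placeholder):

helm install prometheus prometheus-community/kube-prometheus-stack \
  --set prometheus.prometheusSpec.serviceMonitorSelector.matchLabels.monitoring-platform=core-prometheus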
As for the MongoDB helm chart, install it with "metrics.enabled=true", "metrics.serviceMonitor.enabled=true", and "metrics.serviceMonitor.additionalLabels" set to a value similar to the label defined in the Prometheus serviceMonitorSelector ("monitoring-platform: core-prometheus" in this case). Something like this:
metrics:
enabled: true
serviceMonitor:
enabled: true
additionalLabels:
monitoring-platform: core-prometheus
This would enable Prometheus to scrape metrics from MongoDB, which would subsequently show up in Grafana.
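To verify the wiring, something like this helps (the monitoring namespace is an assumption; adjust it to wherever the stack runs):

# confirm the ServiceMonitor exists and carries the expected label
kubectl get servicemonitors -A --show-labels | grep mongodb
# then check Status > Targets in the Prometheus UI
kubectl -n monitoring port-forward svc/prometheus-operated 9090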
(screenshot: grafana-mongodb-dashboard)
When you deploy kube-prometheus-stack with Helm, its ServiceMonitor selector will have the default label value of 'release: <your-kube-prometheus-stack-release-name>'.
So, on MongoDB, you need to set this label value under "metrics.serviceMonitor.additionalLabels":
metrics:
enabled: true
serviceMonitor:
enabled: true
additionalLabels:
release: <your-kube-prometheus-stack-release-name>
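A sketch of applying the same thing from the CLI, assuming the MongoDB release is simply named mongodb:

helm upgrade mongodb bitnami/mongodb --reuse-values \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true \
  --set metrics.serviceMonitor.additionalLabels.release=<your-kube-prometheus-stack-release-name>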
I tried to install Nextcloud on a Linode managed k8s cluster with Helm. The installation process was no problem; everything worked fine. My Nextcloud instance is TLS encrypted, but when I try to log in, nothing happens. In the browser console I only get this message:
Refused to send form data to 'http://cloud.my-domain.io/' because it violates the following Content Security Policy directive: "form-action 'self'".
I figured out that this probably has to do with Nextcloud having a problem with the reverse proxy of my k8s Nginx ingress. I tried to solve the problem by adding 'overwriteprotocol' => 'https' to my config.php (see the sketch below). The error message then disappears, but I am not forwarded to the account page; I still only see the login page.
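A sketch of how the same setting can be injected through the chart's nextcloud.configs instead of editing config.php by hand (the file name proxy.config.php is arbitrary):

nextcloud:
  configs:
    proxy.config.php: |-
      <?php
      $CONFIG = array (
        'overwriteprotocol' => 'https',
      );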
I also tried using the Nginx ingress of the nextcloud helm chart itself, but that doesn't work at all. When I try to access Nextcloud in the browser, I only get a warning that the instance is unsafe: no login page, no Nextcloud page at all, just a blank browser page.
In the end, I tried running Nextcloud without any TLS, and that works fine. But of course that's not what I want; I want a secure connection to Nextcloud.
I have no idea what else I can do. What am I doing wrong? I really hope someone can help me solve the problem. Thanks for your help!
That's my helm values.yaml:
## Official nextcloud image version
## ref: https://hub.docker.com/r/library/nextcloud/tags/
##
image:
repository: nextcloud
tag: 19.0.3-apache
pullPolicy: IfNotPresent
# pullSecrets:
# - myRegistryKeySecretName
nameOverride: ""
fullnameOverride: ""
# Number of replicas to be deployed
replicaCount: 1
## Allowing use of ingress controllers
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
enabled: false
# metadata:
# annotations:
# kubernetes.io/ingress.class: nginx
# nginx.ingress.kubernetes.io/rewrite-target: /
# nginx.ingress.kubernetes.io/proxy-body-size: 4G
# kubernetes.io/tls-acme: "true"
# certmanager.k8s.io/cluster-issuer: letsencrypt-prod
# nginx.ingress.kubernetes.io/server-snippet: |-
# server_tokens off;
# proxy_hide_header X-Powered-By;
# rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
# rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
# rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
# location = /.well-known/carddav {
# return 301 $scheme://$host/remote.php/dav;
# }
# location = /.well-known/caldav {
# return 301 $scheme://$host/remote.php/dav;
# }
# location = /robots.txt {
# allow all;
# log_not_found off;
# access_log off;
# }
# location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
# deny all;
# }
# location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
# deny all;
# }
# tls:
# - secretName: wa-stack-nextcloud-tls
# hosts:
# - cloud.my-domain.io
# labels: {}
# Allow configuration of lifecycle hooks
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
lifecycle: {}
# postStartCommand: []
# preStopCommand: []
nextcloud:
host: cloud.my-domain.io
# username: admin
# password: changeme
# Use an existing secret
existingSecret:
enabled: true
secretName: nextcloud-secret
# for initiator
usernameKey: username
passwordKey: password
# secretName: nameofsecret
# usernameKey: username
# passwordKey: password
# smtpUsernameKey: smtp_username
# smtpPasswordKey: smtp_password
update: 0
datadir: /var/www/html/data
tableprefix: wa
persistence:
subPath:
mail:
enabled: false
fromAddress: user
domain: domain.com
smtp:
host: domain.com
secure: ssl
port: 465
authtype: LOGIN
name: user
password: pass
# PHP Configuration files
# Will be injected in /usr/local/etc/php/conf.d for apache image and in /usr/local/etc/php-fpm.d when nginx.enabled: true
phpConfigs: {}
# Default config files
# IMPORTANT: Will be used only if you put extra configs, otherwise default will come from nextcloud itself
# Default configurations can be found here: https://github.com/nextcloud/docker/tree/master/16.0/apache/config
defaultConfigs:
# To protect /var/www/html/config
.htaccess: true
# Redis default configuration
redis.config.php: true
# Apache configuration for rewrite urls
apache-pretty-urls.config.php: true
# Define APCu as local cache
apcu.config.php: true
# Apps directory configs
apps.config.php: true
# Used for auto configure database
autoconfig.php: true
# SMTP default configuration
smtp.config.php: true
# Extra config files created in /var/www/html/config/
# ref: https://docs.nextcloud.com/server/15/admin_manual/configuration_server/config_sample_php_parameters.html#multiple-config-php-file
configs: {}
# For example, to use S3 as primary storage
# ref: https://docs.nextcloud.com/server/13/admin_manual/configuration_files/primary_storage.html#simple-storage-service-s3
#
# configs:
# s3.config.php: |-
# <?php
# $CONFIG = array (
# 'objectstore' => array(
# 'class' => '\\OC\\Files\\ObjectStore\\S3',
# 'arguments' => array(
# 'bucket' => 'my-bucket',
# 'autocreate' => true,
# 'key' => 'xxx',
# 'secret' => 'xxx',
# 'region' => 'us-east-1',
# 'use_ssl' => true
# )
# )
# );
## Strategy used to replace old pods
## IMPORTANT: use with care; it is suggested to leave it as is for upgrade purposes
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
strategy:
type: Recreate
# type: RollingUpdate
# rollingUpdate:
# maxSurge: 1
# maxUnavailable: 0
##
## Extra environment variables
extraEnv:
# - name: SOME_SECRET_ENV
# valueFrom:
# secretKeyRef:
# name: nextcloud
# key: secret_key
# Extra mounts for the pods. Example shown is for connecting a legacy NFS volume
# to NextCloud pods in Kubernetes. This can then be configured in External Storage
extraVolumes:
# - name: nfs
# nfs:
# server: "10.0.0.1"
# path: "/nextcloud_data"
# readOnly: false
extraVolumeMounts:
# - name: nfs
# mountPath: "/legacy_data"
nginx:
## You need to set an fpm version of the image for nextcloud if you want to use nginx!
enabled: false
image:
repository: nginx
tag: alpine
pullPolicy: IfNotPresent
config:
# This generates the default nginx config as per the nextcloud documentation
default: true
# custom: |-
# worker_processes 1;..
resources: {}
internalDatabase:
enabled: false
name: nextcloud
##
## External database configuration
##
externalDatabase:
enabled: true
## Supported database engines: mysql or postgresql
type: mysql
## Database host
host: maria-db-mariadb-primary
## Database user
# user: wa-cloud
# Database password
# password:
## Database name
database: wa-cloud
## Use an existing secret
existingSecret:
enabled: true
secretName: mariadb-secret
usernameKey: db-username
passwordKey: mariadb-password
##
## MariaDB chart configuration
##
mariadb:
## Whether to deploy a mariadb server to satisfy the applications database requirements. To use an external database set this to false and configure the externalDatabase parameters
enabled: false
db:
name: nextcloud
user: nextcloud
password: changeme
replication:
enabled: false
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
master:
persistence:
enabled: false
# storageClass: ""
accessMode: ReadWriteOnce
size: 8Gi
postgresql:
enabled: false
global:
postgresql:
postgresqlUsername: nextcloud
postgresqlPassword: changeme
postgresqlDatabase: nextcloud
persistence:
enabled: false
# storageClass: ""
redis:
enabled: true
usePassword: false
password: ''
## Cronjob to execute Nextcloud background tasks
## ref: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html#webcron
##
cronjob:
enabled: true
# Nextcloud image is used as default, but only curl is needed
image: {}
# repository: nextcloud
# tag: 16.0.3-apache
# pullPolicy: IfNotPresent
# pullSecrets:
# - myRegistryKeySecretName
# Every 5 minutes
# Note: Setting this to any other value than 5 minutes might
# cause issues with how nextcloud background jobs are executed
schedule: "*/5 * * * *"
annotations: {}
# Set curl's insecure option if you use e.g. self-signed certificates
curlInsecure: false
failedJobsHistoryLimit: 2
successfulJobsHistoryLimit: 2
# If not set, the value from the nextcloud deployment will be used
# resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# If not set, the value from the nextcloud deployment will be used
# nodeSelector: {}
# If not set, the value from the nextcloud deployment will be used
# tolerations: []
# If not set, the value from the nextcloud deployment will be used
# affinity: {}
service:
type: ClusterIP
port: 8080
loadBalancerIP: nil
nodePort: nil
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
# Nextcloud Data (/var/www/html)
enabled: true
annotations: {}
## nextcloud data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: "linode-block-storage"
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
# existingClaim:
accessMode: ReadWriteOnce
size: 20Gi
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
## Liveness and readiness probe values
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
startupProbe:
enabled: false
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 30
successThreshold: 1
## Enable pod autoscaling using HorizontalPodAutoscaler
## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
##
hpa:
enabled: false
cputhreshold: 60
minPods: 1
maxPods: 10
nodeSelector: {}
tolerations: []
affinity: {}
## Prometheus Exporter / Metrics
##
metrics:
enabled: false
replicaCount: 1
# The metrics exporter needs to know how you serve Nextcloud: either http or https
https: true
timeout: 5s
image:
repository: xperimental/nextcloud-exporter
tag: v0.3.0
pullPolicy: IfNotPresent
## Metrics exporter resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
# resources: {}
## Metrics exporter pod Annotation and Labels
# podAnnotations: {}
# podLabels: {}
service:
type: ClusterIP
## Use serviceLoadBalancerIP to request a specific static IP,
## otherwise leave blank
# loadBalancerIP:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9205"
labels: {}
And this is my ingress rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/proxy-body-size: 4G
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/server-snippet: |-
server_tokens off;
proxy_hide_header X-Powered-By;
rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
location = /.well-known/carddav {
return 301 $scheme://$host/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $scheme://$host/remote.php/dav;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
deny all;
}
location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
deny all;
}
name: wa-stack-cloud-ingress-nginx
namespace: business
spec:
tls:
- hosts:
- cloud.my-domain.io
secretName: wa-cloud-tls
rules:
- host: cloud.my-domain.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nextcloud
port:
number: 8080
I have to install a SonarQube Helm chart with PostgreSQL persistence pointing to an external database. This database server is already in use, and the chart is configured as below (IP and password changed for security reasons). My idea is to create a sonarDB database and install the chart. Would it be safe, or would there be a risk?
# Default values for sonarqube.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
# This will use the default deployment strategy unless it is overridden
deploymentStrategy: {}
image:
repository: sonarqube
tag: 7.9.1-community
# If using a private repository, the name of the imagePullSecret to use
# pullSecret: my-repo-secret
# Set security context for sonarqube pod
securityContext:
fsGroup: 999
# Settings to configure elasticsearch host requirements
elasticsearch:
configureNode: true
bootstrapChecks: true
service:
type: ClusterIP
externalPort: 9000
internalPort: 9000
labels:
annotations: {}
# May be used, for example, for internal load balancing in GCP:
# cloud.google.com/load-balancer-type: Internal
# loadBalancerSourceRanges:
# - 0.0.0.0/0
# loadBalancerIP: 1.2.3.4
ingress:
enabled: false
# Used to create an Ingress record.
hosts:
- name: sonar.organization.com
# default paths for "/" and "/*" will be added
path: /
# If a different path is defined, that path and {path}/* will be added to the ingress resource
# path: /sonarqube
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
# This property allows for reports up to a certain size to be uploaded to SonarQube
# nginx.ingress.kubernetes.io/proxy-body-size: "8m"
# Additional labels for Ingress manifest file
# labels:
# traffic-type: external
# traffic-type: internal
tls: []
# Secrets must be manually created in the namespace.
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
# hostAliases allows the modification of the hosts file inside a container
hostAliases: []
# - ip: "192.168.1.10"
# hostnames:
# - "example.com"
# - "www.example.com"
readinessProbe:
initialDelaySeconds: 60
periodSeconds: 30
failureThreshold: 6
# If an ingress *path* other than the root (/) is defined, it should be reflected here
# A trailing "/" must be included
sonarWebContext: /
# sonarWebContext: /sonarqube/
livenessProbe:
initialDelaySeconds: 60
periodSeconds: 30
# If an ingress *path* other than the root (/) is defined, it should be reflected here
# A trailing "/" must be included
sonarWebContext: /
# sonarWebContext: /sonarqube/
# Set extra env variables. Like proxy settings.
extraEnv: {}
# If an ingress *path* is defined, it should be reflected here
# sonar.web.context: /sonarqube
# Set annotations for pods
annotations: {}
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
persistence:
enabled: false
## Set annotations on pvc
annotations: {}
## Specify an existing volume claim instead of creating a new one.
## When using this option all following options like storageClass, accessMode and size are ignored.
#existingClaim: gke-homolog-sonarqube
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass:
accessMode: ReadWriteOnce
size: 10Gi
# List of plugins to install.
# For example:
plugins:
install:
- "https://github.com/sleroy/sonar-slack-notifier-plugin/releases/download/2.5/cks-slack-notifier-2.5.jar"
- "https://repo1.maven.org/maven2/org/sonarsource/java/sonar-java-plugin/5.14.0.18788/sonar-java-plugin-5.14.0.18788.jar"
#plugins:
#install: []
# initContainerImage: alpine:3.10.3
# deleteDefaultPlugins: true
#resources: {}
# We allow the plugins init container to have a separate resources declaration because
# the initContainer does not take as many resources.
# A custom sonar.properties file can be provided via dictionary.
# For example:
# sonarProperties:
# sonar.forceAuthentication: true
# sonar.security.realm: LDAP
# ldap.url: ldaps://organization.com
# Additional sonar properties to load from a secret with a key "secret.properties" (must be a string)
# sonarSecretProperties:
# Kubernetes secret that contains the encryption key for the sonarqube instance.
# The secret must contain the key 'sonar-secret.txt'.
# The 'sonar.secretKeyPath' property will be set automatically.
# sonarSecretKey: "settings-encryption-secret"
customCerts:
## Enable to override the default cacerts with your own one
enabled: false
secretName: my-cacerts
## Configuration value to select database type
## Option to use "postgresql" or "mysql" database type, by default "postgresql" is chosen
## Set the "enable" field to true of the database type you select (if you want to use internal database) and false of the one you don't select
#database:
# type: "postgresql"
## Configuration values for postgresql dependency
## ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md
postgresql:
# Enable to deploy the PostgreSQL chart
enabled: false
# To use an external PostgreSQL instance, set enabled to false and uncomment
# the line below:
postgresServer: "11.31.76.3"
# To use an external secret for the password for an external PostgreSQL
# instance, set enabled to false and provide the name of the secret on the
# line below:
# postgresPasswordSecret: ""
postgresUser: "application"
postgresPassword: "pass123"
postgresDatabase: "sonarDB"
# Specify the TCP port that PostgreSQL should use
service:
port: 5432
## Configuration values for the mysql dependency
## ref: https://github.com/kubernetes/charts/blob/master/stable/mysql/README.md
##
mysql:
# Enable to deploy the mySQL chart
enabled: false
# To use an external mySQL instance, set enabled to false and uncomment
# the line below:
# mysqlServer: ""
# To use an external secret for the password for an external mySQL instance,
# set enabled to false and provide the name of the secret on the line below:
# mysqlPasswordSecret: ""
mysqlUser: "sonarUser"
mysqlPassword: "sonarPass"
mysqlDatabase: "sonarDB"
# mysqlParams:
# useSSL: "true"
# Specify the TCP port that mySQL should use
service:
port: 3306
#
# Additional labels to add to the pods:
# podLabels:
# key: value
podLabels: {}
# For compatibility with 8.0 replace by "/opt/sq"
sonarqubeFolder: /opt/sonarqube
If you match the SonarQube version your existing database is already using, then I doubt you'd have a problem. Out of the box, the Helm chart brings in the Community Edition, so get the correct image tag to use from Docker Hub.
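As a rough sketch of that flow, assuming the stable/sonarqube chart, psql access to the server, and a values file saved as sonarqube-values.yaml (the host and user come from the values above; the tag is the one from the values file and should be adjusted to match your setup):
# Create the new database on the already-running server first.
psql -h 11.31.76.3 -U application -c 'CREATE DATABASE "sonarDB";'
# Then install the chart against the external database, pinning the image tag
# to the SonarQube version/edition your existing setup expects.
helm install --name sonarqube stable/sonarqube -f sonarqube-values.yaml \
  --set image.tag=7.9.1-community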
I need to set up MongoDB on my k8s cluster in Azure and have the data stored in the Azure File service. I am trying to do so with Helm and the following files:
1. StorageClass (account is in the same resource group as my k8s cluster)
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: azurefilestorage
namespace: mongodb
provisioner: kubernetes.io/azure-file
parameters:
storageAccount: mongodb
values.yaml
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
# imageRegistry: myRegistryName
# imagePullSecrets:
# - myRegistryKeySecretName
image:
## Bitnami MongoDB registry
##
registry: docker.io
## Bitnami MongoDB image name
##
repository: bitnami/mongodb
## Bitnami MongoDB image tag
## ref: https://hub.docker.com/r/bitnami/mongodb/tags/
##
tag: 4.0.10-debian-9-r13
## Specify a imagePullPolicy
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## Set to true if you would like to see extra information on logs
## It turns on NAMI debugging in minideb
## ref: https://github.com/bitnami/minideb-extras/#turn-on-nami-debugging
debug: false
## Enable authentication
## ref: https://docs.mongodb.com/manual/tutorial/enable-authentication/
#
usePassword: true
# existingSecret: name-of-existing-secret
## MongoDB admin password
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#setting-the-root-password-on-first-run
##
# mongodbRootPassword:
## MongoDB custom user and database
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#creating-a-user-and-database-on-first-run
##
# mongodbUsername: username
# mongodbPassword: password
# mongodbDatabase: database
## Whether to enable/disable IPv6 on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-ipv6
##
mongodbEnableIPv6: true
## Whether to enable/disable DirectoryPerDB on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-directoryperdb
##
mongodbDirectoryPerDB: false
## MongoDB System Log configuration
## ref: https://github.com/bitnami/bitnami-docker-mongodb#configuring-system-log-verbosity-level
##
mongodbSystemLogVerbosity: 0
mongodbDisableSystemLog: false
## MongoDB additional command line flags
##
## Can be used to specify command line flags, for example:
##
## mongodbExtraFlags:
## - "--wiredTigerCacheSizeGB=2"
mongodbExtraFlags: []
## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
enabled: true
fsGroup: 1001
runAsUser: 1001
## Kubernetes Cluster Domain
clusterDomain: cluster.local
## Kubernetes service type
service:
annotations: {}
type: LoadBalancer
# clusterIP: None
port: 27017
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort: 30123
## Specify the externalIPs value for the ClusterIP service type.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
# externalIPs: []
## Specify the loadBalancerIP value for LoadBalancer service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
##
# loadBalancerIP:
## Setting up replication
## ref: https://github.com/bitnami/bitnami-docker-mongodb#setting-up-a-replication
#
replicaSet:
## Whether to create a MongoDB replica set for high availability or not
enabled: true
useHostnames: true
## Name of the replica set
##
name: rs0
## Key used for replica set authentication
##
# key: key
## Number of replicas per each node type
##
replicas:
secondary: 1
arbiter: 1
## Pod Disruption Budget
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
pdb:
minAvailable:
primary: 1
secondary: 1
arbiter: 1
# Annotations to be added to MongoDB pods
podAnnotations: {}
# Additional pod labels to apply
podLabels: {}
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 100m
memory: 256Mi
## Pod priority
## https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# priorityClassName: ""
## Node selector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}
## Affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
## Tolerations
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
## updateStrategy for MongoDB Primary, Secondary and Arbiter statefulsets
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
type: RollingUpdate
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
enabled: true
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
##
# existingClaim:
## The path the volume will be mounted at, useful when using different
## MongoDB images.
##
mountPath: /bitnami/mongodb
## The subdirectory of the volume to mount to, useful in dev environments
## and when one PV is used for multiple services.
##
subPath: ""
## mongodb data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: azurefilestorage
accessModes:
- ReadWriteOnce
size: 8Gi
annotations: {}
# Expose mongodb via ingress. This is possible if using nginx-ingress
# https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
ingress:
enabled: false
annotations: {}
labels: {}
paths:
- /
hosts: []
tls:
- secretName: tls-cert
hosts: []
## Configure the options for init containers to be run before the main app containers
## are started. All init containers are run sequentially and must exit without errors
## for the next one to be started.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
# extraInitContainers: |
# - name: do-something
# image: busybox
# command: ['do', 'something']
## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
livenessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
# Define custom config map with init scripts
initConfigMap: {}
# name: "init-config-map"
# Entries for the MongoDB config file
configmap:
# # Where and how to store data.
# storage:
# dbPath: /opt/bitnami/mongodb/data/db
# journal:
# enabled: true
# #engine:
# #wiredTiger:
# # where to write logging data.
# systemLog:
# destination: file
# logAppend: true
# path: /opt/bitnami/mongodb/logs/mongodb.log
# # network interfaces
# net:
# port: 27017
# bindIp: 0.0.0.0
# unixDomainSocket:
# enabled: true
# pathPrefix: /opt/bitnami/mongodb/tmp
# # replica set options
# #replication:
# # replSetName: replicaset
# # process management options
# processManagement:
# fork: false
# pidFilePath: /opt/bitnami/mongodb/tmp/mongodb.pid
# # set parameter options
# setParameter:
# enableLocalhostAuthBypass: true
# # security options
# security:
# authorization: enabled
# #keyFile: /opt/bitnami/mongodb/conf/keyfile
## Prometheus Exporter / Metrics
##
metrics:
enabled: true
image:
registry: docker.io
repository: forekshub/percona-mongodb-exporter
tag: latest
pullPolicy: Always
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## String with extra arguments to the metrics exporter
## ref: https://github.com/dcu/mongodb_exporter/blob/master/mongodb_exporter.go
extraArgs: ""
## Metrics exporter resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
# resources: {}
## Metrics exporter liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
livenessProbe:
enabled: true
initialDelaySeconds: 15
periodSeconds: 5
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 1
failureThreshold: 3
successThreshold: 1
## Metrics exporter pod Annotation
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9216"
## Prometheus Service Monitor
## ref: https://github.com/coreos/prometheus-operator
## https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md
serviceMonitor:
## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
enabled: true
## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
additionalLabels: {}
## Specify Metric Relabellings to add to the scrape endpoint
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
# relabellings:
alerting:
## Define individual alerting rules as required
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#rulegroup
## https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/
rules: {}
## Used to pass Labels that are used by the Prometheus installed in your cluster to select Prometheus Rules to work with
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
additionalLabels: {}
If I uncomment line 211 of the values file with:
storageClass: azurefilestorage
and run
helm upgrade mongodb-dev stable/mongodb -f dev_values.yaml
I am getting the following error:
Error: UPGRADE FAILED: StatefulSet.apps "mongodb-dev-primary" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden. && StatefulSet.apps "mongodb-dev-secondary" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden.
Any hint as to where the problem is and how to get MongoDB connected to the Azure File service?
Well, this has nothing to do with Azure Files. The error tells you that StatefulSet spec fields other than replicas, template, and updateStrategy can't be updated in place, so you'd need to delete the StatefulSets (or the whole release) and create them from scratch. See the sketch below.
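A minimal sketch of that recovery path with Helm 2 (the release and values-file names come from the commands above; note that PVCs created from volumeClaimTemplates typically survive the delete and may need to be removed manually for a truly fresh start, so back up the data first):
# Remove the release entirely (Helm 2 syntax, matching the commands above).
helm delete --purge mongodb-dev
# Reinstall with the storageClass line uncommented, so the StatefulSets are
# created with the desired volume settings from the start.
helm install --name mongodb-dev stable/mongodb -f dev_values.yaml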
I'm new to Helm and Kubernetes and cannot figure out how to use helm install --name kibana --namespace logging stable/kibana with the Logtrail plugin enabled. I can see there's an option in the values.yaml file to enable plugins during installation, but I can't work out how to set it.
I've tried this without success:
helm install --name kibana --namespace logging stable/kibana \
--set plugins.enabled=true,plugins.value=logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip
Update:
As Ryan suggested, it's best to provide such complex settings via a custom values file. But as it turned out, the above-mentioned settings are not the only ones you have to provide to get the Logtrail plugin working in Kibana. Some Logtrail configuration must be set before running helm install, and here's how to set it. In your custom values file, set the following:
extraConfigMapMounts:
- name: logtrail
configMap: logtrail
mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
subPath: logtrail.json
After that the full content of your custom values file should look similar to this:
image:
repository: "docker.elastic.co/kibana/kibana-oss"
tag: "6.5.4"
pullPolicy: "IfNotPresent"
commandline:
args: []
env: {}
# All Kibana configuration options are adjustable via env vars.
# To adjust a config option via an env var, uppercase it and replace `.` with `_`
# Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
#
# ELASTICSEARCH_URL: http://elasticsearch-client:9200
# SERVER_PORT: 5601
# LOGGING_VERBOSE: "true"
# SERVER_DEFAULTROUTE: "/app/kibana"
files:
kibana.yml:
## Default Kibana configuration from kibana-docker.
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
## Custom config properties below
## Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
# server.port: 5601
# logging.verbose: "true"
# server.defaultRoute: "/app/kibana"
deployment:
annotations: {}
service:
type: ClusterIP
externalPort: 443
internalPort: 5601
# authProxyPort: 5602 To be used with authProxyEnabled and a proxy extraContainer
## External IP addresses of service
## Default: nil
##
# externalIPs:
# - 192.168.0.1
#
## LoadBalancer IP if service.type is LoadBalancer
## Default: nil
##
# loadBalancerIP: 10.2.2.2
annotations: {}
# Annotation example: setup ssl with aws cert when service.type is LoadBalancer
# service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:EXAMPLE_CERT
labels: {}
## Label example: show service URL in `kubectl cluster-info`
# kubernetes.io/cluster-service: "true"
## Limit load balancer source ips to list of CIDRs (where available)
# loadBalancerSourceRanges: []
ingress:
enabled: false
# hosts:
# - kibana.localhost.localdomain
# - localhost.localdomain/kibana
# annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
# tls:
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
serviceAccount:
# Specifies whether a service account should be created
create: false
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
# If set and create is false, the service account must already exist
name:
livenessProbe:
enabled: false
initialDelaySeconds: 30
timeoutSeconds: 10
readinessProbe:
enabled: false
initialDelaySeconds: 30
timeoutSeconds: 10
periodSeconds: 10
successThreshold: 5
# Enable an authproxy. Specify container in extraContainers
authProxyEnabled: false
extraContainers: |
# - name: proxy
# image: quay.io/gambol99/keycloak-proxy:latest
# args:
# - --resource=uri=/*
# - --discovery-url=https://discovery-url
# - --client-id=client
# - --client-secret=secret
# - --listen=0.0.0.0:5602
# - --upstream-url=http://127.0.0.1:5601
# ports:
# - name: web
# containerPort: 9090
resources: {}
# limits:
# cpu: 100m
# memory: 300Mi
# requests:
# cpu: 100m
# memory: 300Mi
priorityClassName: ""
# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}
# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
podAnnotations: {}
replicaCount: 1
revisionHistoryLimit: 3
# To export a dashboard from a running Kibana 6.3.x use:
# curl --user <username>:<password> -XGET https://kibana.yourdomain.com:5601/api/kibana/dashboards/export?dashboard=<some-dashboard-uuid> > my-dashboard.json
# A dashboard is defined by a name and a string with the json payload or the download url
dashboardImport:
timeout: 60
xpackauth:
enabled: false
username: myuser
password: mypass
dashboards: {}
# k8s: https://raw.githubusercontent.com/monotek/kibana-dashboards/master/k8s-fluentd-elasticsearch.json
# List of plugins to install using initContainer
# NOTE: We have noticed that lower resource constraints given to the chart + plugins are likely not going to work well.
plugins:
# set to true to enable plugins installation
enabled: false
# set to true to remove all kibana plugins before installation
reset: false
# Use <plugin_name,version,url> to add/upgrade plugin
values:
- logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip
# - elastalert-kibana-plugin,1.0.1,https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.1/elastalert-kibana-plugin-1.0.1-6.4.2.zip
# - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip
# - other_plugin
persistentVolumeClaim:
# set to true to use pvc
enabled: false
# set to true to use your own pvc
existingClaim: false
annotations: {}
accessModes:
- ReadWriteOnce
size: "5Gi"
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
# default security context
securityContext:
enabled: false
allowPrivilegeEscalation: false
runAsUser: 1000
fsGroup: 2000
extraConfigMapMounts:
- name: logtrail
configMap: logtrail
mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
subPath: logtrail.json
And the last thing you should do is add this ConfigMap resource to Kubernetes:
apiVersion: v1
kind: ConfigMap
metadata:
name: logtrail
namespace: logging
data:
logtrail.json: |
{
"version" : 1,
"index_patterns" : [
{
"es": {
"default_index": "logstash-*"
},
"tail_interval_in_seconds": 10,
"es_index_time_offset_in_seconds": 0,
"display_timezone": "local",
"display_timestamp_format": "MMM DD HH:mm:ss",
"max_buckets": 500,
"default_time_range_in_days" : 0,
"max_hosts": 100,
"max_events_to_keep_in_viewer": 5000,
"fields" : {
"mapping" : {
"timestamp" : "#timestamp",
"hostname" : "kubernetes.host",
"program": "kubernetes.pod_name",
"message": "log"
},
"message_format": "{{{log}}}"
},
"color_mapping" : {
}
}]
}
After that you're ready to run helm install with the values file specified via the -f flag, as shown below.
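Putting those last steps together (the manifest and values file names here are placeholders for wherever you saved them):
# Create the Logtrail ConfigMap first so the chart's extraConfigMapMounts can resolve it.
kubectl apply -f logtrail-configmap.yaml
# Then install the chart with the custom values file.
helm install --name kibana --namespace logging stable/kibana -f kibana-values.yaml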
Getting input with --set to match what the example in the values file has is a bit tricky. Following the example, we want the values to be:
plugins:
enabled: true
values:
- logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip
The plugins.values entry here is tricky because it is an array, which means you need to enclose it in {}. And the relevant entry contains commas, which have to be escaped with backslashes. To get it to match, you can use:
helm install --name kibana --namespace logging stable/kibana --set plugins.enabled=true,plugins.values={"logtrail\,0.1.30\,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip"}
If you add --dry-run --debug, you can see the computed values for any command you run, including with --set, so this can help you check the match. This kind of value is easier to set with a custom values file referenced with -f, as it avoids having to work out how the --set string evaluates to values.
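For example, to verify that the escaped --set form renders the same computed values as the file-based approach, you can reuse the exact command from above with the two extra flags:
# Render the chart without installing, printing the computed values.
helm install --name kibana --namespace logging stable/kibana \
  --set plugins.enabled=true,plugins.values={"logtrail\,0.1.30\,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip"} \
  --dry-run --debug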