I'm trying to send RabbitMQ logs to both the console and a file. I'm using the RabbitMQ Operator to run the cluster with this definition.yaml:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
name: rabbitmqcluster
spec:
image: rabbitmq:3.8.9-management
replicas: 3
imagePullSecrets:
- name: rabbitmq-cluster-registry-access
service:
type: ClusterIP
persistence:
storageClassName: rbd
storage: 20Gi
resources:
requests:
cpu: 2000m
memory: 6Gi
limits:
cpu: 2000m
memory: 6Gi
rabbitmq:
additionalConfig: |
log.console = true
log.console.level = debug
log.file = rabbit.log
log.dir = /var/lib/rabbitmq/
log.file.level = debug
log.file = true
additionalPlugins:
- rabbitmq_top
- rabbitmq_shovel
- rabbitmq_management
- rabbitmq_peer_discovery_k8s
- rabbitmq_stomp
- rabbitmq_prometheus
- rabbitmq_peer_discovery_consul
After deploying this to the cluster, these are the console logs:
## ## RabbitMQ 3.8.9
## ##
########## Copyright (c) 2007-2020 VMware, Inc. or its affiliates.
###### ##
########## Licensed under the MPL 2.0. Website: https://rabbitmq.com
Doc guides: https://rabbitmq.com/documentation.html
Support: https://rabbitmq.com/contact.html
Tutorials: https://rabbitmq.com/getstarted.html
Monitoring: https://rabbitmq.com/monitoring.html
Logs: <stdout>
Config file(s): /etc/rabbitmq/rabbitmq.conf
/etc/rabbitmq/conf.d/default_user.conf
Starting broker...2021-07-18 07:22:26.203 [info] <0.272.0>
node : rabbit@rabbitmqcluster-server-0.rabbitmqcluster-nodes.rabbitmq-system
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.conf
: /etc/rabbitmq/conf.d/default_user.conf
cookie hash : shP20jU/vTqNF4lW9g0tqg==
log(s) : <stdout>
database dir : /var/lib/rabbitmq/mnesia/rabbit@rabbitmqcluster-server-0.rabbitmqcluster-nodes.rabbitmq-system
After the cluster starts there is no log file at the path I specified; only crash.log is in the directory:
$ls
crash.log
what should I do?
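For comparison, here is a minimal additionalConfig sketch that assigns log.file only once. In the config above log.file appears twice, and the later log.file = true presumably overrides rabbit.log; whether that is the cause here is an assumption, but it is worth ruling out:
rabbitmq:
  additionalConfig: |
    log.console = true
    log.console.level = debug
    log.dir = /var/lib/rabbitmq/
    log.file = rabbit.log
    log.file.level = debug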
I am creating/configuring a service account (SA) in the Helm chart.
It is created (in the k8s namespace, as a secret); however, when I try to use its token in an HTTP/REST API call, e.g. to get folders, it says:
"invalid API key"
The idea is that whenever Grafana is installed from scratch, an SA should be provisioned. Its token will then be used for accessing the REST API.
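For context, the token is passed as a bearer token in the request; the failing call looks roughly like this (the hostname and token are placeholders):
curl -H "Authorization: Bearer <sa-token>" https://grafana.example.com/api/folders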
Chart.yaml
apiVersion: v2
name: kraken-observability-stack
version: 0.1.0
#We don't have an in-house app, so we don't set
#appVersion: 0.1.0
kubeVersion: "^1.20.0-0"
description: The kraken observability stack for collecting and visualizing metrics, logs and traces related to CI pipelines.
home: https://docs.net/
dependencies:
- name: grafana
repository: https://grafana.github.io/helm-charts
version: 6.50.x
- name: mimir-distributed
repository: https://grafana.github.io/helm-charts
version: 3.2.x
- name: loki-distributed
repository: https://grafana.github.io/helm-charts
version: 0.68.x
- name: tempo-distributed
repository: https://grafana.github.io/helm-charts
version: 1.0.x
- name: opentelemetry-collector
repository: https://open-telemetry.github.io/opentelemetry-helm-charts
version: 0.47.x
(partial) values.yaml
grafana:
testFramework:
enabled: false
resources:
limits:
#maybe we shouldn't set cpu limits to avoid overbooking of resources.
#cpu: 1000m
memory: 1Gi
requests:
memory: 200Mi
cpu: 200m
grafana.ini:
force_migration: true
data_proxy:
timeout: 60s
#feature_toggles:
# enable: tempoServiceGraph,tempoSearch,tempoBackendSearch,tempoApmTable
auth:
login_cookie_name: "kraken_grafana_session"
auth.anonymous:
enabled: true
org_name: 'CICDS Pipelines User'
org_role: 'Viewer'
analytics:
reporting_enabled: false
check_for_updates: false
check_for_plugin_updates: false
enable_feedback_links: false
log:
level: warn
mode: console
plugins:
enable_alpha: true
app_tls_skip_verify_insecure: true
allow_loading_unsigned_plugins: true
#podAnnotations for grafana to expose its own metrics
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/schema: "http"
prometheus.io/port: "http"
prometheus.io/path: "/metrics"
rbac:
#disable Create and use RBAC resources
create: false
#disable Create PodSecurityPolicy (we don't have privileges for that)
pspEnabled: false
#disable to enforce AppArmor in created PodSecurityPolicy
pspUseAppArmor: false
serviceAccount:
create: true
name: grafana-init-sa
labels: {kraken-init}
replicas: 3
image:
#repository: docker-virtual.repository.net/grafana/grafana
repository: grafana/grafana
downloadDashboardsImage:
repository: docker-virtual.repository.net/curlimages/curl
tag: 7.85.0
pullPolicy: IfNotPresent
persistence:
type: statefulset
enabled: true
initChownData:
## This allows the prometheus-server to be run with an arbitrary user
##
enabled: false
#image:
# repository: docker-virtual.repository.net/busybox
# Administrator credentials when not using an existing secret (see below)
adminUser: admin
adminPassword: changeit
# Use an existing secret for the admin user.
# grafana-admin-credentials name is reserved by the operator and thus -creds
admin:
existingSecret: "grafana-admin-user"
userKey: ADMIN_USER
passwordKey: ADMIN_PASSWORD
env:
HTTP_PROXY: http://p985nst:p985nst@proxyvip-se.sbcore.net:8080/
HTTPS_PROXY: http://p985nst:p985nst@proxyvip-se.sbcore.net:8080/
NO_PROXY: .cluster.local,.net,.sbcore.net,.svc,10.0.0.0/8,172.30.0.0/16,localhost
# ## Pass the plugins you want installed as a list.
# ##
# plugins:
# - digrich-bubblechart-panel
# - grafana-clock-panel
# - grafana-piechart-panel
# - natel-discrete-panel
extraSecretMounts:
- name: loki-credentials-secret-mount
secretName: loki-credentials
defaultMode: 0440
mountPath: /etc/secrets/.loki_credentials
readOnly: true
My yaml file looks like this:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
name: m1
spec:
domain:
cpu:
cores: 4
devices:
disks:
- name: harddrive
disk: {}
- name: cloudinitdisk
disk: {}
interfaces:
- name: ovs-net
bridge: {}
- name: default
masquerade: {}
resources:
requests:
memory: 8G
volumes:
- name: harddrive
containerDisk:
image: 1.1.1.1:8888/redhat/redhat79:latest
- name: cloudinitdisk
cloudInitNoCloud:
userData: |
#!/bin/bash
echo 1 > /opt/1.txt
networks:
- name: ovs-net
multus:
networkName: ovs-vlan-100
- name: default
pod: {}
The VMI is running and I can log in to the VM, but there is nothing in the '/opt' directory. I found a disk sdb; when I mount sdb to /mnt I can see the file 'userdata', and its content is correct.
I don't know what I did wrong.
K8S 1.22.10
I also tried these two other methods:
1)
cloudInitNoCloud:
userData: |
bootcmd:
- touch /opt/1.txt
runcmd:
- touch /opt/2.txt
cloudInitNoCloud:
secretRef:
name: my-vmi-secret
I expected cloudInitNoCloud to work and run my commands.
I found the problem: the docker image I used doesn't have the cloud-init packages installed.
The official KubeVirt documentation doesn't mention this, so I assumed I could use it directly.
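For reference, here is a minimal cloudInitNoCloud sketch in cloud-config format; the #cloud-config header on the first line is required for cloud-init to interpret bootcmd/runcmd, and cloud-init must be installed in the image (which was the missing piece above):
- name: cloudinitdisk
  cloudInitNoCloud:
    userData: |
      #cloud-config
      bootcmd:
        - touch /opt/1.txt
      runcmd:
        - touch /opt/2.txt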
I am currently using Loki to store logs generated by my applications on EKS Fargate. A sidecar pattern with Promtail is used to scrape logs. A single Loki pod is used, and S3 is configured as the destination to store logs. It works nicely, as expected. However, when I tested the availability of the logging system by deleting pods, I discovered that if Loki's pod was deleted, some logs went missing (roughly the 20 minutes before the pod was deleted), even after the pod restarted.
To solve this problem, I tried to use EFS as the persistent volume of Loki's pod, mounted at the path /loki. I followed this article for the whole process (https://aws.amazon.com/blogs/aws/new-aws-fargate-for-amazon-eks-now-supports-amazon-efs/). But I got an error from the Loki pod with msg "error running loki" err="mkdir /loki/compactor: permission denied"
Therefore, I have two questions:
Should I use EFS as a solution for log backup in my case?
Why did I get "permission denied" inside the pod, and is there any way to solve this problem?
My Loki-config.yaml
auth_enabled: false
server:
http_listen_port: 3100
# grpc_listen_port: 9096
ingester:
wal:
enabled: true
dir: /loki/wal
lifecycler:
ring:
kvstore:
store: inmemory
replication_factor: 1
# final_sleep: 0s
chunk_idle_period: 3m
chunk_retain_period: 30s
max_transfer_retries: 0
chunk_target_size: 1048576
schema_config:
configs:
- from: 2020-05-15
store: boltdb-shipper
object_store: aws
schema: v11
index:
prefix: index_
period: 24h
storage_config:
boltdb_shipper:
active_index_directory: /loki/index
cache_location: /loki/index_cache
shared_store: s3
aws:
bucketnames: bucketnames
endpoint: s3.us-west-2.amazonaws.com
region: us-west-2
access_key_id: access_key_id
secret_access_key: secret_access_key
sse_encryption: true
compactor:
working_directory: /loki/compactor
shared_store: s3
compaction_interval: 5m
limits_config:
reject_old_samples: true
reject_old_samples_max_age: 48h
chunk_store_config:
max_look_back_period: 0s
table_manager:
retention_deletes_enabled: true
retention_period: 96h
querier:
query_ingesters_within: 0
analytics:
reporting_enabled: false
Deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: fargate-api-dev
name: dev-loki
spec:
selector:
matchLabels:
app: dev-loki
template:
metadata:
labels:
app: dev-loki
spec:
volumes:
- name: loki-config
configMap:
name: dev-loki-config
- name: dev-loki-efs-pv
persistentVolumeClaim:
claimName: dev-loki-efs-pvc
containers:
- name: loki
image: loki:2.6.1
args:
- -print-config-stderr=true
- -config.file=/tmp/loki.yaml
resources:
limits:
memory: "500Mi"
cpu: "200m"
ports:
- containerPort: 3100
volumeMounts:
- name: dev-loki-config
mountPath: /tmp
readOnly: false
- name: dev-loki-efs-pv
mountPath: /loki
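The claim dev-loki-efs-pvc referenced above is not shown; a static-provisioning sketch of what it could look like with the EFS CSI driver follows (the file system ID fs-xxxxxxxx and the storage class name efs-sc are placeholders, and the capacity is only nominal since EFS ignores it):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-loki-efs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxx
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-loki-efs-pvc
  namespace: fargate-api-dev
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi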
Promtail-config.yaml
server:
log_level: info
http_listen_port: 9080
clients:
- url: http://loki.com/loki/api/v1/push
positions:
filename: /run/promtail/positions.yaml
scrape_configs:
- job_name: api-log
static_configs:
- targets:
- localhost
labels:
job: apilogs
pod: ${POD_NAME}
__path__: /var/log/*.log
I had a similar issue using EFS as the volume to store the logs, and I found this solution: https://github.com/grafana/loki/issues/2018#issuecomment-1030221498
Basically, the Loki container is not able to create the directories it needs to start working on its own, so we used an initContainer to do it for it.
This solution worked like a charm for me.
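A sketch of what that initContainer could look like in the Deployment above; the busybox image and the UID/GID 10001 (the default user in the official Loki image) are assumptions to adjust for your own image:
initContainers:
  - name: init-loki-dirs
    image: busybox:1.35
    command:
      - sh
      - -c
      - mkdir -p /loki/compactor /loki/index /loki/index_cache /loki/wal && chown -R 10001:10001 /loki
    volumeMounts:
      - name: dev-loki-efs-pv
        mountPath: /loki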
I tried to install Nextcloud on a Linode-managed k8s cluster with Helm. The installation process was no problem; everything worked fine. My Nextcloud instance is TLS encrypted, but when I try to log in, nothing happens. In the browser console I only get this message:
Refused to send form data to 'http://cloud.my-domain.io/' because it violates the following Content Security Policy directive: "form-action 'self'".
I figured out that this probably has to do with Nextcloud having a problem with the reverse proxy of my k8s Nginx ingress. I tried to solve the problem by adding 'overwriteprotocol' => 'https' to my config.php. The error message then disappears, but I am not forwarded to the account page; I still only see the login page.
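(For reference, the same setting can also be injected through the chart's nextcloud.configs map instead of editing config.php inside the pod; a minimal sketch, where the file name proxy.config.php is an arbitrary choice:)
nextcloud:
  configs:
    proxy.config.php: |-
      <?php
      $CONFIG = array (
        'overwriteprotocol' => 'https',
      );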
I also tried using the Nginx ingress of the Nextcloud Helm chart itself, but that doesn't work at all. When I try to access Nextcloud in the browser, I'm only told that the instance is insecure. No login page, no Nextcloud page at all, just a blank browser page.
In the end, I tried to use Nextcloud without any TLS, and that works fine. But of course that's not what I want; I want a secure connection to Nextcloud.
I have no idea what else I can do. What am I doing wrong? I really hope that someone can help me solve the problem. Thanks for your help!
That's my helm values.yaml:
## Official nextcloud image version
## ref: https://hub.docker.com/r/library/nextcloud/tags/
##
image:
repository: nextcloud
tag: 19.0.3-apache
pullPolicy: IfNotPresent
# pullSecrets:
# - myRegistrKeySecretName
nameOverride: ""
fullnameOverride: ""
# Number of replicas to be deployed
replicaCount: 1
## Allowing use of ingress controllers
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
enabled: false
# metadata:
# annotations:
# kubernetes.io/ingress.class: nginx
# nginx.ingress.kubernetes.io/rewrite-target: /
# nginx.ingress.kubernetes.io/proxy-body-size: 4G
# kubernetes.io/tls-acme: "true"
# certmanager.k8s.io/cluster-issuer: letsencrypt-prod
# nginx.ingress.kubernetes.io/server-snippet: |-
# server_tokens off;
# proxy_hide_header X-Powered-By;
# rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
# rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
# rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
# location = /.well-known/carddav {
# return 301 $scheme://$host/remote.php/dav;
# }
# location = /.well-known/caldav {
# return 301 $scheme://$host/remote.php/dav;
# }
# location = /robots.txt {
# allow all;
# log_not_found off;
# access_log off;
# }
# location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
# deny all;
# }
# location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
# deny all;
# }
# tls:
# - secretName: wa-stack-nextcloud-tls
# hosts:
# - cloud.my-domain.io
# labels: {}
# Allow configuration of lifecycle hooks
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
lifecycle: {}
# postStartCommand: []
# preStopCommand: []
nextcloud:
host: cloud.my-domain.io
# username: admin
# password: changeme
# Use an existing secret
existingSecret:
enabled: true
secretName: nextcloud-secret
# for initiator
usernameKey: username
passwordKey: password
# secretName: nameofsecret
# usernameKey: username
# passwordKey: password
# smtpUsernameKey: smtp_username
# smtpPasswordKey: smtp_password
update: 0
datadir: /var/www/html/data
tableprefix: wa
persistence:
subPath:
mail:
enabled: false
fromAddress: user
domain: domain.com
smtp:
host: domain.com
secure: ssl
port: 465
authtype: LOGIN
name: user
password: pass
# PHP Configuration files
# Will be injected in /usr/local/etc/php/conf.d for apache image and in /usr/local/etc/php-fpm.d when nginx.enabled: true
phpConfigs: {}
# Default config files
# IMPORTANT: Will be used only if you put extra configs, otherwise default will come from nextcloud itself
# Default confgurations can be found here: https://github.com/nextcloud/docker/tree/master/16.0/apache/config
defaultConfigs:
# To protect /var/www/html/config
.htaccess: true
# Redis default configuration
redis.config.php: true
# Apache configuration for rewrite urls
apache-pretty-urls.config.php: true
# Define APCu as local cache
apcu.config.php: true
# Apps directory configs
apps.config.php: true
# Used for auto configure database
autoconfig.php: true
# SMTP default configuration
smtp.config.php: true
# Extra config files created in /var/www/html/config/
# ref: https://docs.nextcloud.com/server/15/admin_manual/configuration_server/config_sample_php_parameters.html#multiple-config-php-file
configs: {}
# For example, to use S3 as primary storage
# ref: https://docs.nextcloud.com/server/13/admin_manual/configuration_files/primary_storage.html#simple-storage-service-s3
#
# configs:
# s3.config.php: |-
# <?php
# $CONFIG = array (
# 'objectstore' => array(
# 'class' => '\\OC\\Files\\ObjectStore\\S3',
# 'arguments' => array(
# 'bucket' => 'my-bucket',
# 'autocreate' => true,
# 'key' => 'xxx',
# 'secret' => 'xxx',
# 'region' => 'us-east-1',
# 'use_ssl' => true
# )
# )
# );
## Strategy used to replace old pods
## IMPORTANT: use with care, it is suggested to leave as that for upgrade purposes
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
strategy:
type: Recreate
# type: RollingUpdate
# rollingUpdate:
# maxSurge: 1
# maxUnavailable: 0
##
## Extra environment variables
extraEnv:
# - name: SOME_SECRET_ENV
# valueFrom:
# secretKeyRef:
# name: nextcloud
# key: secret_key
# Extra mounts for the pods. Example shown is for connecting a legacy NFS volume
# to NextCloud pods in Kubernetes. This can then be configured in External Storage
extraVolumes:
# - name: nfs
# nfs:
# server: "10.0.0.1"
# path: "/nextcloud_data"
# readOnly: false
extraVolumeMounts:
# - name: nfs
# mountPath: "/legacy_data"
nginx:
## You need to set an fpm version of the image for nextcloud if you want to use nginx!
enabled: false
image:
repository: nginx
tag: alpine
pullPolicy: IfNotPresent
config:
# This generates the default nginx config as per the nextcloud documentation
default: true
# custom: |-
# worker_processes 1;..
resources: {}
internalDatabase:
enabled: false
name: nextcloud
##
## External database configuration
##
externalDatabase:
enabled: true
## Supported database engines: mysql or postgresql
type: mysql
## Database host
host: maria-db-mariadb-primary
## Database user
# user: wa-cloud
# Database password
# password:
## Database name
database: wa-cloud
## Use a existing secret
existingSecret:
enabled: true
secretName: mariadb-secret
usernameKey: db-username
passwordKey: mariadb-password
##
## MariaDB chart configuration
##
mariadb:
## Whether to deploy a mariadb server to satisfy the applications database requirements. To use an external database set this to false and configure the externalDatabase parameters
enabled: false
db:
name: nextcloud
user: nextcloud
password: changeme
replication:
enabled: false
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
master:
persistence:
enabled: false
# storageClass: ""
accessMode: ReadWriteOnce
size: 8Gi
postgresql:
enabled: false
global:
postgresql:
postgresqlUsername: nextcloud
postgresqlPassword: changeme
postgresqlDatabase: nextcloud
persistence:
enabled: false
# storageClass: ""
redis:
enabled: true
usePassword: false
password: ''
## Cronjob to execute Nextcloud background tasks
## ref: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html#webcron
##
cronjob:
enabled: true
# Nextcloud image is used as default but only curl is needed
image: {}
# repository: nextcloud
# tag: 16.0.3-apache
# pullPolicy: IfNotPresent
# pullSecrets:
# - myRegistrKeySecretName
# Every 5 minutes
# Note: Setting this to any other value than 5 minutes might
# cause issues with how nextcloud background jobs are executed
schedule: "*/5 * * * *"
annotations: {}
# Set curl's insecure option if you use e.g. self-signed certificates
curlInsecure: false
failedJobsHistoryLimit: 2
successfulJobsHistoryLimit: 2
# If not set, nextcloud deployment one will be set
# resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# If not set, nextcloud deployment one will be set
# nodeSelector: {}
# If not set, nextcloud deployment one will be set
# tolerations: []
# If not set, nextcloud deployment one will be set
# affinity: {}
service:
type: ClusterIP
port: 8080
loadBalancerIP: nil
nodePort: nil
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
# Nextcloud Data (/var/www/html)
enabled: true
annotations: {}
## nextcloud data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: "linode-block-storage"
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
# existingClaim:
accessMode: ReadWriteOnce
size: 20Gi
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
## Liveness and readiness probe values
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
startupProbe:
enabled: false
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 30
successThreshold: 1
## Enable pod autoscaling using HorizontalPodAutoscaler
## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
##
hpa:
enabled: false
cputhreshold: 60
minPods: 1
maxPods: 10
nodeSelector: {}
tolerations: []
affinity: {}
## Prometheus Exporter / Metrics
##
metrics:
enabled: false
replicaCount: 1
# The metrics exporter needs to know how you serve Nextcloud either http or https
https: true
timeout: 5s
image:
repository: xperimental/nextcloud-exporter
tag: v0.3.0
pullPolicy: IfNotPresent
## Metrics exporter resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
# resources: {}
## Metrics exporter pod Annotation and Labels
# podAnnotations: {}
# podLabels: {}
service:
type: ClusterIP
## Use serviceLoadBalancerIP to request a specific static IP,
## otherwise leave blank
# loadBalancerIP:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9205"
labels: {}
And that's my ingress rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/proxy-body-size: 4G
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/server-snippet: |-
server_tokens off;
proxy_hide_header X-Powered-By;
rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
location = /.well-known/carddav {
return 301 $scheme://$host/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $scheme://$host/remote.php/dav;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
deny all;
}
location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
deny all;
}
name: wa-stack-cloud-ingress-nginx
namespace: business
spec:
tls:
- hosts:
- cloud.my-domain.io
secretName: wa-cloud-tls
rules:
- host: cloud.my-domain.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nextcloud
port:
number: 8080
I'm new to Helm and Kubernetes and cannot figure out how to use helm install --name kibana --namespace logging stable/kibana with the Logtrail plugin enabled. I can see there's an option in the values.yaml file to enable plugins during installation, but I can't work out how to set it.
I've tried this without success:
helm install --name kibana --namespace logging stable/kibana \
--set plugins.enabled=true,plugins.value=logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip
Update:
As Ryan suggested, it's best to provide such complex settings via a custom values file. But as it turned out, the settings mentioned above are not the only ones you have to provide to get the Logtrail plugin working in Kibana. Some configuration for Logtrail must be set before running helm install, and here's how to set it. In your custom values file, set the following:
extraConfigMapMounts:
- name: logtrail
configMap: logtrail
mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
subPath: logtrail.json
After that the full content of your custom values file should look similar to this:
image:
repository: "docker.elastic.co/kibana/kibana-oss"
tag: "6.5.4"
pullPolicy: "IfNotPresent"
commandline:
args: []
env: {}
# All Kibana configuration options are adjustable via env vars.
# To adjust a config option to an env var uppercase + replace `.` with `_`
# Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
#
# ELASTICSEARCH_URL: http://elasticsearch-client:9200
# SERVER_PORT: 5601
# LOGGING_VERBOSE: "true"
# SERVER_DEFAULTROUTE: "/app/kibana"
files:
kibana.yml:
## Default Kibana configuration from kibana-docker.
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
## Custom config properties below
## Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
# server.port: 5601
# logging.verbose: "true"
# server.defaultRoute: "/app/kibana"
deployment:
annotations: {}
service:
type: ClusterIP
externalPort: 443
internalPort: 5601
# authProxyPort: 5602 To be used with authProxyEnabled and a proxy extraContainer
## External IP addresses of service
## Default: nil
##
# externalIPs:
# - 192.168.0.1
#
## LoadBalancer IP if service.type is LoadBalancer
## Default: nil
##
# loadBalancerIP: 10.2.2.2
annotations: {}
# Annotation example: setup ssl with aws cert when service.type is LoadBalancer
# service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:EXAMPLE_CERT
labels: {}
## Label example: show service URL in `kubectl cluster-info`
# kubernetes.io/cluster-service: "true"
## Limit load balancer source ips to list of CIDRs (where available)
# loadBalancerSourceRanges: []
ingress:
enabled: false
# hosts:
# - kibana.localhost.localdomain
# - localhost.localdomain/kibana
# annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
# tls:
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
serviceAccount:
# Specifies whether a service account should be created
create: false
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
# If set and create is false, the service account must be existing
name:
livenessProbe:
enabled: false
initialDelaySeconds: 30
timeoutSeconds: 10
readinessProbe:
enabled: false
initialDelaySeconds: 30
timeoutSeconds: 10
periodSeconds: 10
successThreshold: 5
# Enable an authproxy. Specify container in extraContainers
authProxyEnabled: false
extraContainers: |
# - name: proxy
# image: quay.io/gambol99/keycloak-proxy:latest
# args:
# - --resource=uri=/*
# - --discovery-url=https://discovery-url
# - --client-id=client
# - --client-secret=secret
# - --listen=0.0.0.0:5602
# - --upstream-url=http://127.0.0.1:5601
# ports:
# - name: web
# containerPort: 9090
resources: {}
# limits:
# cpu: 100m
# memory: 300Mi
# requests:
# cpu: 100m
# memory: 300Mi
priorityClassName: ""
# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}
# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
podAnnotations: {}
replicaCount: 1
revisionHistoryLimit: 3
# To export a dashboard from a running Kibana 6.3.x use:
# curl --user <username>:<password> -XGET https://kibana.yourdomain.com:5601/api/kibana/dashboards/export?dashboard=<some-dashboard-uuid> > my-dashboard.json
# A dashboard is defined by a name and a string with the json payload or the download url
dashboardImport:
timeout: 60
xpackauth:
enabled: false
username: myuser
password: mypass
dashboards: {}
# k8s: https://raw.githubusercontent.com/monotek/kibana-dashboards/master/k8s-fluentd-elasticsearch.json
# List of plugins to install using initContainer
# NOTE : We notice that lower resource constraints given to the chart + plugins are likely not going to work well.
plugins:
# set to true to enable plugins installation
enabled: false
# set to true to remove all kibana plugins before installation
reset: false
# Use <plugin_name,version,url> to add/upgrade plugin
values:
- logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip
# - elastalert-kibana-plugin,1.0.1,https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.1/elastalert-kibana-plugin-1.0.1-6.4.2.zip
# - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip
# - other_plugin
persistentVolumeClaim:
# set to true to use pvc
enabled: false
# set to true to use you own pvc
existingClaim: false
annotations: {}
accessModes:
- ReadWriteOnce
size: "5Gi"
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
# default security context
securityContext:
enabled: false
allowPrivilegeEscalation: false
runAsUser: 1000
fsGroup: 2000
extraConfigMapMounts:
- name: logtrail
configMap: logtrail
mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
subPath: logtrail.json
And the last thing you should do is add this ConfigMap resource to Kubernetes:
apiVersion: v1
kind: ConfigMap
metadata:
name: logtrail
namespace: logging
data:
logtrail.json: |
{
"version" : 1,
"index_patterns" : [
{
"es": {
"default_index": "logstash-*"
},
"tail_interval_in_seconds": 10,
"es_index_time_offset_in_seconds": 0,
"display_timezone": "local",
"display_timestamp_format": "MMM DD HH:mm:ss",
"max_buckets": 500,
"default_time_range_in_days" : 0,
"max_hosts": 100,
"max_events_to_keep_in_viewer": 5000,
"fields" : {
"mapping" : {
"timestamp" : "#timestamp",
"hostname" : "kubernetes.host",
"program": "kubernetes.pod_name",
"message": "log"
},
"message_format": "{{{log}}}"
},
"color_mapping" : {
}
}]
}
After that you're ready to helm install with the values file specified via the -f flag.
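For example, assuming the file above was saved as kibana-logtrail-values.yaml:
helm install --name kibana --namespace logging stable/kibana -f kibana-logtrail-values.yaml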
Getting input with --set that matches what the example in the values file has is a bit tricky. Following the example, we want the values to be:
plugins:
enabled: true
values:
- logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip
The plugins.values entry here is tricky because it is an array, which means you need to enclose it in {}. And the relevant entry contains commas, which have to be escaped with a backslash. To get it to match you can use:
helm install --name kibana --namespace logging stable/kibana --set plugins.enabled=true,plugins.values={"logtrail\,0.1.30\,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip"}
If you add --dry-run --debug, then you can see what the computed values are for any command you run, including with --set, so this can help check the match. This kind of value is easier to set with a custom values file referenced with -f, as it avoids having to work out how the --set string evaluates to values.
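For instance, this renders the chart without installing it and prints the merged values, so you can verify the plugins section before the real install (same placeholder file name as above):
helm install --name kibana --namespace logging stable/kibana -f kibana-logtrail-values.yaml --dry-run --debug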