How do I write this into the config correctly?
Information taken from here:
https://altinity.com/blog/integrating-clickhouse-with-ldap-part-one
Operator:
https://github.com/Altinity/clickhouse-operator
When I apply my configuration, ClickHouse fails with:
Unknown setting server: while parsing profile 'ldap' in users configuration file: while loading configuration file '/etc/clickhouse-server/users.xml'
My operator settings are:
settings:
  # to allow scraping metrics via the embedded Prometheus protocol
  prometheus/endpoint: /metrics
  prometheus/port: 8888
  prometheus/metrics: true
  prometheus/events: true
  prometheus/asynchronous_metrics: true
  ldap_servers/ldap_test/host: host ldap
  ldap_servers/ldap_test/port: 389
  ldap_servers/ldap_test/bind_dn: DC=passport,DC=local
profiles:
  ldap/server: ldap_test
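Judging by the error, ClickHouse is treating ldap under profiles as a settings profile named 'ldap' containing an unknown setting server. Per the Altinity blog linked above, the ldap/server reference belongs to a user rather than to a settings profile, so in operator notation something like the sketch below seems closer (test_user is a hypothetical user name):

settings:
  ldap_servers/ldap_test/host: host ldap
  ldap_servers/ldap_test/port: 389
  ldap_servers/ldap_test/bind_dn: DC=passport,DC=local
users:
  # attach the LDAP server reference to a user, not to a profile
  test_user/ldap/server: ldap_test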
Related
I've been fighting with Kibana for a few days and I can't manage to start it on my FreeBSD server.
This is my environment:
FreeBSD 11.1-STABLE
ElasticSearch 5.3.0
Kibana 5.3.0
Logstash 5..
Elasticsearch and Logstash work fine, but I can't manage to start the Kibana service.
These are the files related to Kibana:
kibana.yml file:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are
# both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
server.basePath: "/qual/kibana"
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000
# Specifies the path where Kibana creates the process ID file.
pid.file: /var/run/kibana.pid
# Enables you to specify a file where Kibana stores log output.
logging.dest: /var/log/kibana.log
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"
/usr/local/etc/rc.d/kibana:
#!/bin/sh
#
# $FreeBSD: head/textproc/kibana5/files/kibana.in 462830 2018-02-24 14:17:41Z feld $
#
# PROVIDE: kibana
# REQUIRE: DAEMON
# KEYWORD: shutdown
. /etc/rc.subr
name=kibana
rcvar=kibana_enable
load_rc_config $name
: ${kibana_enable:="NO"}
: ${kibana_config:="/usr/local/etc/kibana.yml"}
: ${kibana_user:="www"}
: ${kibana_group:="www"}
: ${kibana_log:="/var/log/kibana.log"}
required_files="${kibana_config}"
pidfile="/var/run/${name}/${name}.pid"
start_precmd="kibana_precmd"
procname="/usr/local/bin/node"
command="/usr/sbin/daemon"
command_args="-f -p ${pidfile} env BABEL_DISABLE_CACHE=1 ${procname} /usr/local/www/kibana5/src/cli serve --config ${kibana_config} --log-file ${kibana_log}"
kibana_precmd()
{
    if [ ! -d $(dirname ${pidfile}) ]; then
        install -d -o ${kibana_user} -g ${kibana_group} $(dirname ${pidfile})
    fi
    if [ ! -f ${kibana_log} ]; then
        install -o ${kibana_user} -g ${kibana_group} -m 640 /dev/null ${kibana_log}
    fi
    if [ ! -d /usr/local/www/kibana5/optimize ]; then
        install -d -o ${kibana_user} -g ${kibana_group} /usr/local/www/kibana5/optimize
    fi
}
run_rc_command "$1"
/etc/rc.conf:
kibana_enable="YES"
But when I execute service kibana start, I get:
root@server:/var/log # service kibana start
Starting kibana.
root@server:/var/log # service kibana status
kibana is not running.
I don't know why.
Start the service in debug mode:
sh -x /usr/local/etc/rc.d/kibana start
and find which command is used to start the Kibana service. For Kibana, the command should be something like /usr/local/bin/node /usr/local/www/kibana6/src/cli serve --config /usr/local/etc/kibana/kibana.yml
Then start the process in the foreground. It is possible that node is not properly installed, or that there is a permission issue.
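To run the process in the foreground, you can execute the same command the rc script builds, as the service user, and watch the output directly (a sketch, using the user and paths from the rc script above):

su -m www -c 'env BABEL_DISABLE_CACHE=1 /usr/local/bin/node /usr/local/www/kibana5/src/cli serve --config /usr/local/etc/kibana.yml'

If node is missing, or the www user cannot read the config or write to the optimize directory, the error will print here instead of being swallowed by daemon(8).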
Problem
Does anyone know how to configure bootstrap.yml to tell Spring Cloud Vault to go to the correct path for kv v2 and not try other paths first?
Details
I can successfully connect to my Vault, running kv v2, but Spring Cloud will always try to connect to paths in the vault that don't exist, throwing a 403 on startup.
Status 403 Forbidden [secret/application]: permission denied; nested exception is org.springframework.web.client.HttpClientErrorException$Forbidden: 403 Forbidden
The above path, secret/application, doesn't exist because kv v2 puts data in the path. For example: secret/data/application.
This isn't a show-stopper, because Spring Cloud Vault does check other paths, including the correct one that has the data item in the path, but the fact that a meaningless 403 is thrown during startup is like a splinter in my mind.
Ultimately, it does try the correct kv v2 path:
2019-03-18 12:22:46.611 INFO 77685 --- [ restartedMain] b.c.PropertySourceBootstrapConfiguration : Located property source: CompositePropertySource {name='vault', propertySources=[LeaseAwareVaultPropertySource {name='secret/data/my-app'}
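The data segment is a kv v2 API detail that the CLI normally hides, which is easy to see from the command line (a sketch, assuming a kv v2 backend mounted at secret/):

vault read secret/application      # v1-style path; finds nothing on a kv v2 mount
vault kv get secret/application    # the kv subcommand rewrites this to secret/data/application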
My configuration
spring.cloud.vault:
  kv:
    enabled: true
    backend: secret
    profile-separator: '/'
    default-context: my-app
  application-name: my-app
  host: localhost
  port: 8200
  scheme: http
  authentication: TOKEN
  token: my-crazy-long-token-string
Thanks for your help!
Add the following lines to your bootstrap.yml; this disables the generic backend:
spring.cloud.vault:
  generic:
    enabled: false
For more information, see https://cloud.spring.io/spring-cloud-vault/reference/html/#vault.config.backends.generic
In addition to the accepted answer, it's important to turn off (or just remove) the fail-fast option:
spring.cloud.vault:
  fail-fast: false
spring.cloud.vault.generic.enabled is deprecated in spring-cloud 3.0.0, but the 403 error is still there. To disable the warning (by telling spring to use the exact context), this is what I used:
spring:
  config:
    import: vault://
  application:
    name: my-application
  cloud:
    vault:
      host: localhost
      scheme: http
      authentication: TOKEN
      token: my-crazy-long-token-string
      kv:
        default-context: my-application
Other configs were set to default (such as port = 8200, backend = secret, etc.)
I am experimenting with HashiCorp Vault.
The version I am using is Vault v0.11.0.
The startup log is as below:
Api Address: https://ldndsr000004893:8200
Cgo: disabled
Cluster Address: https://ldndsr000004893:8201
Listener 1: tcp (addr: "ldndsr000004893:8200", cluster address: "10.75.40.30:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: false
Storage: file
Version: Vault v0.11.0
Version Sha: 87492f9258e0227f3717e3883c6a8be5716bf56
The server configuration is as below:
listener "tcp" {
  address     = "ldndsr000004893:8200"
  scheme      = "http"
  tls_disable = 1
}

#storage "inmem" {
#}

#storage "zookeeper" {
#  address = "localhost:2182"
#  path    = "vault/"
#}

storage "file" {
  path = "/app/iag/phoenix/vault/data"
}

# Advertise the non-loopback interface
api_addr = "https://ldndsr000004893:8200"
disable_mlock = true
ui = true
I have put a number of key-value pairs into Vault and was able to retrieve data normally using the Vault command line. But at some point it stopped working, and I am not able to unseal Vault from either the UI or the command line.
UI error:
Any advice on this issue would be appreciated, as I am going to use Vault for storing all credential information.
It turns out it was a problem with the Vault UI running in the Chrome browser.
I had to open a new incognito window, which showed the sign-in window; after I keyed in the token, Vault got unsealed.
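For reference, the seal status can also be checked, and the unseal performed, from the CLI, independently of any browser state (a sketch, assuming the listener address from the config above; in v0.11.0 the unseal command lives under the operator subcommand):

export VAULT_ADDR='http://ldndsr000004893:8200'
vault status                # reports whether Vault is sealed and the unseal threshold
vault operator unseal       # prompts for one unseal key; repeat until the threshold is met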
I have an Amazon EC2 instance running and I am trying to set up StatsD+InfluxDB+Grafana. InfluxDB and Grafana work well (and Grafana sees the data from InfluxDB), but I can't manage to get any data from StatsD to InfluxDB.
I have a domain registered, which is pointed to my EC2 instance with an Elastic IP.
What I can see is that:
- I can perfectly interact with the InfluxDB database (including inserting values) when I don't use StatsD
- StatsD seems to be getting the data I randomly generate from Python (I can see it in its logs). It is sent through the port 8125 to StatsD.
- UDP packets sent from StatsD to InfluxDB through port 8086 seem not to be getting to InfluxDB (or not being sent...?)
- Port 8086 is open on my AWS security settings for both TCP and UDP
- Port 8125 is open on my AWS security settings for UDP
I am wondering whether some of my settings are wrong, but I don't know what else to try:
InfluxDB configuration file contains:
# hostname = "localhost"
hostname = "MYDOMAIN.com"
[[udp]]
enabled = true
bind-address = ":8086"
database = "MY_DATABASE"
retention-policy = ""
batch-size = 1000 # will flush if this many points get buffered
batch-pending = 10 # number of batches that may be pending in memory
batch-timeout = "1s" # will flush at least this often even if we haven't hit buffer limit
read-buffer = 0 # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
udp-payload-size = 65536
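As a sanity check on the UDP path (a sketch, assuming the listener config above; smoke_test is a hypothetical measurement name), a point can be written straight to the UDP listener in line protocol and then queried back:

echo "smoke_test value=1" | nc -u -w1 MYDOMAIN.com 8086
influx -database 'MY_DATABASE' -execute 'SELECT * FROM smoke_test'

If the point shows up (after the 1s batch-timeout), the listener and network path are fine and the problem is on the StatsD side.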
My StatsD configuration file contains (among other things) the following lines:
{
  influxdb: {
    /*
    host: '127.0.0.1',       // InfluxDB host (default 127.0.0.1)
    */
    host: 'MYDOMAIN.com',    // InfluxDB host (default 127.0.0.1)
    port: 8086,              // InfluxDB port (default 8086)
    database: 'MY_DATABASE', // InfluxDB db instance (required)
    username: 'MY_USERNAME', // InfluxDB db username (required)
    password: 'MY_PASSWORD', // InfluxDB db password (required)
    flush: {
      enable: true           // enable regular flush strategy (default true)
    },
    proxy: {
      enable: false,         // enable the proxy strategy (default false)
      suffix: 'raw',         // metric name suffix (default 'raw')
      flushInterval: 1000
    }
  },
  port: 8125,                // StatsD port
  backends: ['./backends/console'],
  debug: true,
  legacyNamespace: false
}
As far as I understand, the process is:
Python --> Port 8125 --> StatsD --> Port 8086 --> InfluxDB
Is there a need to use something like Telegraf or statsd-influxdb-backend to connect StatsD and InfluxDB?
I would truly appreciate any help, because I have been trying to set this up for hours and I don't see what could be wrong.
Thanks!
The part of the stack I'm not sure about is your StatsD server. It's probably having a problem posting the data to InfluxDB. If you use Telegraf instead it should "just work". Telegraf can act as a StatsD server (among many other things) and send data to InfluxDB via either UDP or the regular HTTP protocol.
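For what it's worth, a minimal Telegraf configuration for that role might look like this (a sketch, reusing the ports, database, and credentials from the question):

[[inputs.statsd]]
  service_address = ":8125"

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "MY_DATABASE"
  username = "MY_USERNAME"
  password = "MY_PASSWORD"

With this in place Telegraf replaces the StatsD daemon entirely and writes to the regular InfluxDB HTTP API, so the separate [[udp]] listener block in the InfluxDB config is no longer needed.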
I set up my services to use the Spring Cloud Eureka-based config server.
Version info: Spring Cloud 1.0.1.RELEASE
When I set it up as a fixed endpoint, I can see that it gets the right configuration file and that I can access actuator endpoints like health, info, etc., so .../manage/info returns the correct information.
However, when I set it up to use discovery, the same actuator endpoints time out when I try to access them.
In each case the configuration file is retrieved and downloaded (log included below).
Is there an issue with how I set up the config server and the bookmark service (the service that uses the config server)?
My configuration server settings are as follows:
server:
  port: 8888
  contextPath: /configurationservice
eureka:
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  instance:
    leaseRenewalIntervalInSeconds: 10
    statusPageUrlPath: /configurationservice/info
    homePageUrlPath: /configurationservice/
    healthCheckUrlPath: /configurationservice/health
    preferIpAddress: true
spring:
  cloud:
    config:
      server:
        native:
          searchLocations: file:/Users/larrymitchell/libertas/configserver/configfiles
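One thing to double-check (not shown in the config above, so it may be set elsewhere, e.g. on the command line): the config server only reads from searchLocations when the native profile is active:

spring:
  profiles:
    active: native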
The service bootstrap.yml settings are:
spring:
  profiles:
    default: development
    active: development
  application:
    name: bookmarkservice
  cloud:
    config:
      enabled: true # note this needs to be turned on if you want the config server to work
      # uri: http://localhost:8888/configurationservice
      label: 1.0.0
      discovery:
        enabled: true
        serviceId: configurationservice
The application.yml settings are:
# general spring settings
spring:
application:
name: bookmarkservice
profiles:
default: development
active: development
# name of the service
service:
name: bookmarkservice
# embedded web server settings
# some of these are specific to tomcat
server:
port: 9001
# the context path is the part after http:/localhost:8080
contextPath: /bookmarkservice
tomcat:
basedir: target/tomcat
uri-encoding: UTF-8
management:
context-path: /manage
security:
enabled: false
eureka:
client:
registerWithEureka: true
fetchRegistry: true
serviceUrl:
defaultZone: http://localhost:8761/eureka/
instance:
statusPageUrlPath: /bookmarkservice/manage/info
homePageUrlPath: /bookmarkservice/manage
healthCheckUrlPath: /bookmarkservice/manage/health
preferIpAddress: true
The startup log for bookmark service is as follows:
2015-06-24 17:52:49.806 DEBUG 11234 --- [ main] o.s.web.client.RestTemplate : Created GET request for "http://10.132.1.56:8888/configurationservice/bookmarkservice/development/1.0.0"
2015-06-24 17:52:49.890 DEBUG 11234 --- [ main] o.s.web.client.RestTemplate : Setting request Accept header to [application/json, application/*+json]
2015-06-24 17:52:50.439 DEBUG 11234 --- [ main] o.s.web.client.RestTemplate : GET request for "http://10.132.1.56:8888/configurationservice/bookmarkservice/development/1.0.0" resulted in 200 (OK)
2015-06-24 17:52:50.441 DEBUG 11234 --- [ main] o.s.web.client.RestTemplate : Reading [class org.springframework.cloud.config.environment.Environment] as "application/json;charset=UTF-8" using [org.springframework.http.converter.json.MappingJackson2HttpMessageConverter#2b07e607]
2015-06-24 17:52:50.466 INFO 11234 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: CompositePropertySource [name='configService', propertySources=[MapPropertySource [name='file:/Users/larrymitchell/libertas/configserver/configfiles/1.0.0/bookmarkservice-development.yml']]]
2015-06-24 17:52:50.503 INFO 11234 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#5fa23965: startup date [Wed Jun 24 17:52:50 EDT 2015]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext#5cced717
2015-06-24 17:52:51.723 WARN 11234 --- [ main] .i.s.PathMatchingResourcePatternResolver : Skipping [/var/folders/kq/ykvl3t4n3l71p7s9ymywb4ym0000gn/T/spring-boot-libs/06f98804e83cf4a94380b46591b976b1d17c36b8-eureka-client-1.1.147.jar] because it does not denote a directory
2015-06-24 17:52:53.662 INFO 11234 --- [ main] o.s.b.f.config.PropertiesFactoryBean : Loading properties file from URL [jar:file:/Users/larrymitchell/libertas/vipaas/applicationservices/bookmarkservice/target/bookmarkservice.jar!/lib/spring-integration-core-4.1.2.RELEASE.jar!/META-INF/spring.integration.default.properties]
OK, after talking it over with another coworker, I figured out what the actual issue was.
Part of the confusion is that I am using the Spring Cloud Dashboard (https://github.com/VanRoy/spring-cloud-dashboard), which is a great front end, by the way. When the service starts, we see it go to discovery, retrieve the correct configuration file, and load it. In the Spring Cloud console I then see a status of UP, which means the service is discovered and registered through discovery. There is a second status indicator, for when the dashboard takes the registered endpoint and gets its health; in my case that endpoint was showing up as UNKNOWN.
If I then used the endpoint that shows up in the console and tried the info actuator endpoint, the request timed out. This was the nature of my problem.
OK, so what was the issue?
Basically, since I defined the server port in application.yml, the service does not yet know the port when it registers during bootstrap, so it picks the default of 8080 (my assumption, since that is what it does). The server port is set in application.yml to 9001, but discovery sees a registration of 8080, so the Spring Cloud console cannot access localhost:8080/bookmarkservice/manage/health; there is no service at that endpoint (it is actually at 9001). Other services also cannot find the service.
By moving server.port to bootstrap.yml, the correct endpoint of the service is registered and the service is properly accessible.
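A sketch of the resulting bootstrap.yml, merging the original bootstrap settings with the relocated port (the rest of application.yml stays where it was):

server:
  port: 9001

spring:
  profiles:
    default: development
    active: development
  application:
    name: bookmarkservice
  cloud:
    config:
      enabled: true
      label: 1.0.0
      discovery:
        enabled: true
        serviceId: configurationservice

With the port known at bootstrap time, Eureka registers 9001, and the dashboard's health check hits an endpoint that actually exists.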