HAProxy multi-line config

Is it possible to split configuration arguments (in haproxy.cfg) onto multiple lines?
Example
Current
frontend https-in
    bind :443 ssl strict-sni crt </path/to/cert1.pem> crt </path/to/cert2.pem> crt </path/to/cert3.pem> ...
Ideal
frontend https-in
    bind :443 ssl strict-sni
        crt </path/to/cert1.pem>
        crt </path/to/cert2.pem>
        crt </path/to/cert3.pem>
        ...
Error when trying ideal
$ /usr/sbin/haproxy -c -V -f /etc/haproxy/haproxy.cfg
[ALERT] 343/210133 (25646) : parsing [/etc/haproxy/haproxy.cfg:45] : unknown keyword 'crt' in 'frontend' section
[ALERT] 343/210133 (25646) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT] 343/210133 (25646) : Fatal errors found in configuration.

You can't do multiline syntax in the haproxy.cfg.
Take a look at the file format documentation: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#2.1
Update:
Thanks to the comment from Venky I see that there is also the option to use crt-list, which does allow referencing multiple PEM files, one per line: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.1-crt-list
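A minimal sketch of the crt-list approach (the file name /etc/haproxy/crt-list.txt is just an example):

frontend https-in
    bind :443 ssl strict-sni crt-list /etc/haproxy/crt-list.txt

where /etc/haproxy/crt-list.txt lists one certificate per line (each line may also carry per-certificate SSL options and SNI filters):

/path/to/cert1.pem
/path/to/cert2.pem
/path/to/cert3.pem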

Related

How to set host-header depending on chosen backend when using HAProxy as loadbalancer

Under certain circumstances it is required to modify the host-header based on the backend selected by HAProxy loadbalancing (an example is described here: https://www.claudiokuenzler.com/blog/919/haproxy-how-use-different-http-host-header-for-each-backend-server)
My internet research shows that there are basically two ways to achieve this:
a) use "http-send-name-header Host" (not recommended according to the HAProxy docs: https://www.haproxy.com/documentation/hapee/latest/onepage/#4.2-http-send-name-header)
b) use "http-request set-header Host ... if { srv_id 1 }"
(the following article describes these two techniques: https://serverfault.com/questions/876871/configure-haproxy-to-include-host-headers-for-different-backends)
Option a) works as expected, but its use is discouraged (see https://www.haproxy.com/documentation/hapee/latest/onepage/#4.2-http-send-name-header).
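For completeness, option a) would look roughly like this (a minimal sketch using the question's example domains; http-send-name-header sends the name of the selected server in the given header, so each server is named after the Host value it should receive):

backend loadbalanced-backends
    mode http
    balance roundrobin
    option forwardfor
    http-send-name-header Host
    # the server *name* becomes the Host header sent to that server
    server one.domain.com one.domain.com:8000
    server two.domain.com two.domain.com:8000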
Since option a) is discouraged, I am trying option b). Unfortunately the config cowardly refuses to work:
backend loadbalanced-backends
mode http
balance roundrobin
option forwardfor
http-request set-header Host one.domain.com if { srv_id 1 }
http-request set-header Host two.domain.com if { srv_id 2 }
server one one.domain.com:8000
server two two.domain.com:8000
The warnings printed in the log:
[WARNING] (1) : config : parsing ....cfg:14] : anonymous acl will never match because it uses keyword 'srv_id' which is incompatible with 'backend http-request header rule'
[WARNING] (1) : config : parsing ....cfg:15] : anonymous acl will never match because it uses keyword 'srv_id' which is incompatible with 'backend http-request header rule'
Besides the warning, the logged request headers show that the host is not changed to one.domain.com or two.domain.com.
What exactly does the warning "anonymous acl will never match..." mean? I do not see what is wrong with the configuration. Any ideas are welcome.

Kibana is not running on FreeBSD

I've been fighting with Kibana for a few days and I can't manage to get it to start on my FreeBSD server.
This is my environment:
FreeBSD 11.1-STABLE
ElasticSearch 5.3.0
Kibana 5.3.0
Logstash 5..
ElasticSearch and Logstash work fine, but I can't manage to start the Kibana service.
These are the files related to Kibana:
kibana.yml file:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are
# both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
server.basePath: "/qual/kibana"
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000
# Specifies the path where Kibana creates the process ID file.
pid.file: /var/run/kibana.pid
# Enables you specify a file where Kibana stores log output.
logging.dest: /var/log/kibana.log
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"
/usr/local/etc/rc.d/kibana:
#!/bin/sh
#
# $FreeBSD: head/textproc/kibana5/files/kibana.in 462830 2018-02-24 14:17:41Z feld $
#
# PROVIDE: kibana
# REQUIRE: DAEMON
# KEYWORD: shutdown
. /etc/rc.subr
name=kibana
rcvar=kibana_enable
load_rc_config $name
: ${kibana_enable:="NO"}
: ${kibana_config:="/usr/local/etc/kibana.yml"}
: ${kibana_user:="www"}
: ${kibana_group:="www"}
: ${kibana_log:="/var/log/kibana.log"}
required_files="${kibana_config}"
pidfile="/var/run/${name}/${name}.pid"
start_precmd="kibana_precmd"
procname="/usr/local/bin/node"
command="/usr/sbin/daemon"
command_args="-f -p ${pidfile} env BABEL_DISABLE_CACHE=1 ${procname} /usr/local/www/kibana5/src/cli serve --config ${kibana_config} --log-file ${kibana_log}"
kibana_precmd()
{
if [ ! -d $(dirname ${pidfile}) ]; then
install -d -o ${kibana_user} -g ${kibana_group} $(dirname ${pidfile})
fi
if [ ! -f ${kibana_log} ]; then
install -o ${kibana_user} -g ${kibana_group} -m 640 /dev/null ${kibana_log}
fi
if [ ! -d /usr/local/www/kibana5/optimize ]; then
install -d -o ${kibana_user} -g ${kibana_group} /usr/local/www/kibana5/optimize
fi
}
run_rc_command "$1"
/etc/rc.conf:
kibana_enable="YES"
But when I execute: service kibana start
I get:
root@server:/var/log # service kibana start
Starting kibana.
root@server:/var/log # service kibana status
kibana is not running.
I don't know why.
Start the service in debug mode:
sh -x /usr/local/etc/rc.d/kibana start
and find which command is used to start the kibana service. For kibana, the command should be something like /usr/local/bin/node /usr/local/www/kibana6/src/cli serve --config /usr/local/etc/kibana/kibana.yml
Then start that process in the foreground.
It is possible that node is not properly installed, or that there is a permission issue.
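A minimal sketch of such a foreground run, assuming the www user and the kibana5/kibana.yml paths from the rc script quoted above (adjust to your install):

su -m www -c '/usr/local/bin/node /usr/local/www/kibana5/src/cli serve --config /usr/local/etc/kibana.yml'

Since the config sets logging.dest: /var/log/kibana.log, any startup error should land in that file; temporarily commenting that line out makes the errors print to the terminal instead.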

HAProxy frontend rule matching order

I have a haproxy configuration as follows (haproxy 1.7). We want to catch all OPTIONS requests and respond to them directly instead of routing them to the backends (which have basic auth enabled).
This was working fine when we developed it, but now it seems not to be matching the rules in order (I'm not sure what we have/haven't done to cause this):
global
log 127.0.0.1 local1
tune.ssl.default-dh-param 2048
lua-load /etc/haproxy/cors.lua
stats socket /var/run/haproxy.sock mode 400
# Default certificate and key directories
ca-base /etc/ssl/private
crt-base /etc/ssl/private
# User lists used to enforce HTTP Basic Authentication
userlist ul_100123-2ovt9rsu
user app1 password $6$lCjf6VnWhI$kcjmpWdV.odeYf4psUhcVKs49ZtPk3MDhg5wtLNUx658A3EWdDHJQqs9xCD1d.7zG05M2nwOxdkC6o/MSpifv0
userlist ul_100123-9uvsclqr
user app1 password $6$DlcLoDMMu$wDm3O0W1eiQuk8gI.GmpzI1.jbBf.UYQ.KM73nHa1tGZJNfzkDpVnLUhh7v7C9yPHB1oo0cRrFnfOdeyAf/eU1
# Front-end for public services which have SSL termination at the router.
frontend term
bind *:443 accept-proxy ssl no-sslv3 crt router/fred-external.pem crt router/fred-external.ace.pem crt router
reqadd X-Forwarded-Proto:\ https
rspidel ^(Server|X-Powered-By):
option forwardfor
mode http
http-request use-service lua.cors-response if METH_OPTIONS { req.hdr(origin) -m found }
acl host_match_100123-2ovt9rsu ssl_fc_sni -i 2ovt9rsu.fredurl.com
use_backend b_term_100123-2ovt9rsu if host_match_100123-2ovt9rsu
......
If I curl -X OPTIONS to 2ovt9rsu.fredurl.com it matches the 2nd rule and forwards me to the b_term_100123-2ovt9rsu backend which then fails as I haven't provided auth creds.
If I curl -X OPTIONS to Anything.fredurl.com it matches the first http-request and responds with the cors response as expected.
Why does the 2ovt9rsu.fredurl.com not match the first http-request rule and then return the cors-response?
In the logs we can see
Nov 7 18:24:09 localhost haproxy[37302]: 94.45.23.22:49853 [07/Nov/2017:18:24:09.807] term~ b_term_100123-2ovt9rsu/<lua.cors-response> -1/-1/-1/-1/73 401 249 - - PR-- 0/0/0/0/3 0/0 "OPTIONS / HTTP/1.1"
when the request gets forwarded to the backend
http-request gets executed before use_backend; the config looks good to me. Have you set the Origin header when you curl?
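The lua.cors-response rule only fires when an Origin header is present (because of the { req.hdr(origin) -m found } condition), so a test request needs to send one, for example (the origin value is arbitrary):

curl -X OPTIONS -H "Origin: https://example.com" https://2ovt9rsu.fredurl.com/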

haproxy sni ssl_fc_has_sni always 0

I am trying to create an SNI based frontend/backend setup in HAProxy. It seems that ssl_fc_has_sni is always evaluating to 0 in my log and I haven't been able to figure out why.
This is a simplified version of the config I've been testing with:
global
user haproxy
group haproxy
daemon
log /dev/log local0
defaults
timeout connect 5s
timeout client 30s
timeout server 30s
timeout tunnel 1h
log-format frontend:%f\ %b/%s\ client_ip:%Ci\ client_port:%Cp\ SSL_version:%sslv\ SSL_cypher:%sslc\ SNI:%[ssl_fc_has_sni]\ %ts
frontend public_ssl
bind :443
log global
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
use_backend be_sni if { ssl_fc_has_sni }
default_backend be_no_sni
backend be_sni
server fe_sni 127.0.0.1:10444 weight 1 send-proxy
frontend fe_sni
#terminate with a cert that matches the sni host
bind 127.0.0.1:10444 ssl crt /mycertdir/certs accept-proxy no-sslv3
default_backend be_default
frontend fe_no_sni
#terminate with a generic cert
bind 127.0.0.1:10443 ssl crt /myothercertdir/default_pub_keys.pem accept-proxy no-sslv3
default_backend be_default
# backend for when sni does not exist, or ssl term needs to happen on the edge
backend be_no_sni
server fe_no_sni 127.0.0.1:10443 weight 1 send-proxy
backend be_default
mode http
option forwardfor
option http-pretend-keepalive
server the_backend 127.0.0.1:8080
Other items of note:
haproxy -vv shows OpenSSL library supports SNI : yes
I am running haproxy version 1.5.9 on fedora 20 through vagrant
the log always shows SNI:0 haproxy[17807]: frontend:public_ssl be_no_sni/fe_no_sni client_ip:<ip> client_port:42285 SSL_version:- SSL_cypher:- SNI:0 --
I'm testing with openssl s_client -servername www.example.com -connect <ip>:443.
I feel like I'm missing something obvious since there is no ssl version, cypher, or sni.
Looks like ssl_fc_has_sni is meant to be used after SSL termination, so on a plain tcp bind like this one it always evaluates to 0. Checking for the presence of an SNI host before termination can be accomplished with:
frontend public_ssl
bind :443
mode tcp
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
use_backend be_sni if { req.ssl_sni -m found }
default_backend be_no_sni
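If specific SNI names need their own backends, the same pre-termination fetch can be matched against a hostname (be_example and www.example.com are placeholders):

use_backend be_example if { req.ssl_sni -i www.example.com }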

How to use non-standard ports?

I'm trying to get moov running on port 8080 but am getting the error:
$ curl -s -i http://mlocal.nytimes.com:8080/
HTTP/1.0 534 Internal Server Error
Connection: close
Content-Type: text/plain;
Content-Length: 69
Host header 'mlocal.nytimes.com:8080' did not match project rewriters
I'm starting the server with:
$ sudo moov server -p=8080 --auto-hosts
(It appears to work fine on port 80.)
There's an additional step you need to take when you manually specify a port to run on.
Go into the project files and open config.json.
Append :8080 to the domain names specified, like this:
"$.nytimes.com:8080 => www.nytimes.com",
etc.
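As a rough sketch, assuming the project's config.json keeps these rewriter mappings in a host_map array (a common layout for Moovweb projects; adjust to wherever the mappings live in your file):

"host_map": [
  "$.nytimes.com:8080 => www.nytimes.com"
]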