I'm using HAProxy as a reverse proxy and load balancer for three servers. Each of these servers uses Basic Authentication to control access, and HAProxy shares the load across them via a round-robin method.
I'd like to log the IP address, user agent, request URL and Basic Authentication username for each request that HAProxy handles, and to write this to a custom file so that another script can check periodically to ensure that credentials are not being shared by my users.
It looks like this is possible to do, but I cannot work out how.
Here's what I've added to my haproxy.cfg file in my frontend section:
# Log name of server
capture request header Host len 500
# Capture request user agent
capture request header User-Agent len 64
# Capture authorization details
capture request header Authorization len 64
log-format "%ci:%cp [%t] %H %HP %hr %hrl"
When I include this in my haproxy.cfg file and restart the service, HAProxy fails to start. 'systemctl status haproxy.service' shows:
haproxy.service - HAProxy Load Balancer
Loaded: loaded (/lib/systemd/system/haproxy.service; enabled)
Active: failed (Result: start-limit) since Mon 2018-06-18 13:09:04 BST; 18s ago
Docs: man:haproxy(1)
file:/usr/share/doc/haproxy/configuration.txt.gz
Process: 25529 ExecReload=/bin/kill -USR2 $MAINPID (code=exited, status=0/SUCCESS)
Process: 25527 ExecReload=/usr/sbin/haproxy -c -f ${CONFIG} (code=exited, status=0/SUCCESS)
Process: 9883 ExecStart=/usr/sbin/haproxy-systemd-wrapper -f ${CONFIG} -p /run/haproxy.pid $EXTRAOPTS (code=exited, status=0/SUCCESS)
Process: 10644 ExecStartPre=/usr/sbin/haproxy -f ${CONFIG} -c -q (code=exited, status=1/FAILURE)
Main PID: 9883 (code=exited, status=0/SUCCESS)
Jun 18 13:09:04 host systemd[1]: Failed to start HAProxy Load Balancer.
Jun 18 13:09:04 host systemd[1]: Unit haproxy.service entered failed state.
Jun 18 13:09:04 host systemd[1]: haproxy.service holdoff time over, scheduling restart.
Jun 18 13:09:04 host systemd[1]: Stopping HAProxy Load Balancer...
Jun 18 13:09:04 host systemd[1]: Starting HAProxy Load Balancer...
Jun 18 13:09:04 host systemd[1]: haproxy.service start request repeated too quickly, refusing to start.
Jun 18 13:09:04 host systemd[1]: Failed to start HAProxy Load Balancer.
Jun 18 13:09:04 host systemd[1]: Unit haproxy.service entered failed state.
What am I doing wrong?
I'm new to HAProxy. I have turned on basic auth on the backend servers and I wanted to see in the HAProxy log who is making requests.
I have done it like this: in the backend section, I set the variable txn.user:
http-request set-var(txn.user) http_auth_group(myuserlist)
Of course you need to have the myuserlist userlist defined, but this is easy.
In the frontend I added this variable (%[var(txn.user)]) to my custom log format:
log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r [%[var(txn.user)]]"
I can now see IP address, user agent, request URL and HAProxy basic authentication username.
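Putting the pieces together, here is a minimal sketch of where each directive lives. The userlist name, usernames, passwords and server address below are placeholders (the credentials must match whatever your backends accept), and the log-format line is the one shown above:
userlist myuserlist
    # insecure-password is fine for a quick test; prefer hashed passwords in production
    user alice insecure-password alicepass
    user bob insecure-password bobpass

frontend myfrontend
    bind *:80
    # captured headers end up in the %hr field of the log line
    capture request header User-Agent len 64
    # (the log-format line shown above goes here)
    default_backend mybackend

backend mybackend
    balance roundrobin
    # record the Basic Auth username so the frontend log-format can print it
    http-request set-var(txn.user) http_auth_group(myuserlist)
    server web1 192.0.2.10:80 check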
Just for fun (since I'm using the tiny busybox HTTP server, about 20k in size):
on the backend I added:
http-response set-header Set-Cookie user=%[var(txn.user)];path=/;SameSite=strict;Secure
and inside the HTML I read this cookie with JavaScript and show on the page who is logged in :)
HAProxy only knows how to log to a syslog socket / daemon.
You will need to tag your logs (with the log-tag directive, probably on a backend) and configure your syslog daemon to write the entries matching the previously defined tag to a dedicated file.
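For example, a minimal sketch (the tag name, rsyslog file path and log file are assumptions you would adapt; it assumes HAProxy already logs to the local syslog socket, e.g. log /dev/log local0 in the global section). In haproxy.cfg, on the proxy whose traffic you want in the separate file:
backend mybackend
    log-tag haproxy-auth
Then in rsyslog, for instance:
# /etc/rsyslog.d/49-haproxy-auth.conf
if $programname == 'haproxy-auth' then /var/log/haproxy-auth.log
& stop
Restart rsyslog and HAProxy afterwards so both pick up the changes.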
Related
I've been working on trying to integrate OnlyOffice with my working Nextcloud app. I've attempted this many times and in many different ways. I believe I understand "most" of the mistakes I've made previously... but there always seems to be one more.
Nextcloud is running on an Ubuntu 22.04 VM. I have another VM running an nginx reverse proxy for this and the other apps I want to expose to the outside. For this attempt I've decided to also run OnlyOffice on the Nextcloud server, but on different ports from that app.
Using the modified template suggested for SSL-enabled nginx, when I attempt to start the service I get:
bonzo@cloud:/etc/nginx/conf.d$ systemctl status nginx.service
nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2022-08-05 15:12:02 CDT; 8s ago
Docs: man:nginx(8)
Process: 7990 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
CPU: 26ms
Aug 05 15:12:02 cloud systemd[1]: Starting A high performance web server and a reverse proxy server...
Aug 05 15:12:02 cloud nginx[7990]: nginx: [emerg] host not found in upstream "docservice" in /etc/nginx/includes/ds-docservice.conf:74
Aug 05 15:12:02 cloud nginx[7990]: nginx: configuration file /etc/nginx/nginx.conf test failed
Aug 05 15:12:02 cloud systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE
Aug 05 15:12:02 cloud systemd[1]: nginx.service: Failed with result 'exit-code'.
Aug 05 15:12:02 cloud systemd[1]: Failed to start A high performance web server and a reverse proxy server.
When I look at line 74 in ds-docservice.conf, it shows this:
location / {
proxy_pass http://docservice;
}
And I'm not exactly sure what nginx is expecting to see there. I haven't included it in this first post, but I'd be happy to share my ds.conf or any other logs/configs that would help. I also realize this is probably something silly that I've missed or messed up; I'm still new to nginx, and integrating OnlyOffice has been a lot more difficult than you'd think from the literature and YouTube videos I've seen!
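For reference, from the examples I've looked at I gather nginx expects an upstream block (or at least a resolvable hostname) named docservice somewhere in the included configs, something like the sketch below; the address and port are just my guess at where the OnlyOffice docservice listens:
upstream docservice {
    # wherever the OnlyOffice docservice actually listens (assumption on my part)
    server 127.0.0.1:8000;
}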
Thanks for any help
We are facing an issue starting the HAProxy instance we installed.
We are using Ubuntu 16.04 and the installed version is:
HA-Proxy version 1.6.3 2015/12/25
Copyright 2000-2015 Willy Tarreau
The folder /run/haproxy has been created.
Everything downloaded correctly.
We have uninstalled and reinstalled it, with the same errors, so we are desperately seeking help.
This is the file /etc/default/haproxy:
# Defaults file for HAProxy
#
# This is sourced by both, the initscript and the systemd unit file, so do not
# treat it as a shell script fragment.
# Change the config file location if needed
#CONFIG="/etc/haproxy/haproxy.cfg"
ENABLED=1
# Add extra flags here, see haproxy(1) for a few options
#EXTRAOPTS="-de -m 16"
Here is /etc/haproxy/haproxy.cfg
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DE$
ssl-default-bind-options no-sslv3
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend myfrontend
bind *:80
mode http
default_backend mybackend
backend mybackend
mode http
balance roundrobin
option httpchk HEAD / HTTP/1.1\r\nHost:\ localhost
server web1 10.10.2.110:80 check weight 10
server web2 10.10.2.111:80 check weight 20
server web3 10.10.2.112:80 check weight 30
Here is the error message:
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Thu 2018-04-19 14:35:21 UTC; 5s ago
Docs: man:haproxy(1)
file:/usr/share/doc/haproxy/configuration.txt.gz
Process: 29395 ExecStartPre=/usr/sbin/haproxy -f ${CONFIG} -c -q (code=exited, status=1/FAILURE)
Main PID: 28467 (code=exited, status=0/SUCCESS)
Apr 19 14:35:20 dats42-lb systemd[1]: Failed to start HAProxy Load Balancer.
Apr 19 14:35:20 dats42-lb systemd[1]: haproxy.service: Unit entered failed state.
Apr 19 14:35:20 dats42-lb systemd[1]: haproxy.service: Failed with result 'exit-code'.
Apr 19 14:35:21 dats42-lb systemd[1]: haproxy.service: Service hold-off time over, scheduling restart.
Apr 19 14:35:21 dats42-lb systemd[1]: Stopped HAProxy Load Balancer.
Apr 19 14:35:21 dats42-lb systemd[1]: haproxy.service: Start request repeated too quickly.
Apr 19 14:35:21 dats42-lb systemd[1]: Failed to start HAProxy Load Balancer.
Anyone who can help? :)
In your /etc/haproxy/haproxy.cfg file, under the global section, there is this entry: stats socket /run/haproxy/admin.sock mode 660 level admin
Check whether the admin.sock file is actually being created. Also check whether the directory it is supposed to be created in exists.
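Something along these lines (the paths come from the config above; the owner/group is an assumption, use whatever user and group haproxy runs as):
ls -ld /run/haproxy             # does the directory exist?
ls -l /run/haproxy/admin.sock   # is the socket actually being created?

# if the directory is missing, create it before starting haproxy
mkdir -p /run/haproxy
chown haproxy:haproxy /run/haproxy
systemctl restart haproxy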
I'm trying to install phpMyAdmin on CentOS 7 on a DigitalOcean droplet. I edited the allowed IP setting so that any IP is allowed (mine is dynamic), but when I try to restart the service I get this message:
[root@centos-512mb-nyc2-01 /]# sudo systemctl restart httpd.service
Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.
Here is the result after running systemctl status httpd.service:
[root@centos /]# systemctl status httpd.service
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2016-04-26 04:47:31 EDT; 1min 50s ago
Docs: man:httpd(8)
man:apachectl(8)
Process: 2633 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=1/FAILURE)
Process: 2632 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=1/FAILURE)
Main PID: 2632 (code=exited, status=1/FAILURE)
Apr 26 04:47:31 centos-512mb-nyc2-01 systemd[1]: Starting The Apache HTTP Server...
Apr 26 04:47:31 centos-512mb-nyc2-01 httpd[2632]: AH00526: Syntax error on line 1 of /etc/httpd/conf.d/phpMyAdmin.conf:
Apr 26 04:47:31 centos-512mb-nyc2-01 httpd[2632]: allow not allowed here
Apr 26 04:47:31 centos-512mb-nyc2-01 systemd[1]: httpd.service: main process exited, code=exited, status=1/FAILURE
Apr 26 04:47:31 centos-512mb-nyc2-01 kill[2633]: kill: cannot find process ""
Apr 26 04:47:31 centos-512mb-nyc2-01 systemd[1]: httpd.service: control process exited, code=exited status=1
Apr 26 04:47:31 centos-512mb-nyc2-01 systemd[1]: Failed to start The Apache HTTP Server.
Apr 26 04:47:31 centos-512mb-nyc2-01 systemd[1]: Unit httpd.service entered failed state.
Apr 26 04:47:31 centos-512mb-nyc2-01 systemd[1]: httpd.service failed.
Here is my phpMyAdmin.conf file:
Allow from
# phpMyAdmin - Web based MySQL browser written in php
#
# Allows only localhost by default
#
# But allowing phpMyAdmin to anyone other than localhost should be considered
# dangerous unless properly secured by SSL
Alias /phpMyAdmin /usr/share/phpMyAdmin
Alias /phpmyadmin /usr/share/phpMyAdmin
<Directory /usr/share/phpMyAdmin/>
AddDefaultCharset UTF-8
<IfModule mod_authz_core.c>
# Apache 2.4
<RequireAny>
#Require ip 127.0.0.1
Require all granted
#Require ip ::1
</RequireAny>
</IfModule>
<IfModule !mod_authz_core.c>
# Apache 2.2
Order Deny,Allow
Deny from All
Allow from 127.0.0.1
Allow from ::1
</IfModule>
</Directory>
<Directory /usr/share/phpMyAdmin/setup/>
<IfModule mod_authz_core.c>
# Apache 2.4
<RequireAny>
Why don't you use the one-click Application Image that DigitalOcean offers?
You can get the full tutorial here
I am trying to change the default port of OpenLDAP (I'm not so experienced with OpenLDAP, so I might be doing something incorrectly).
Currently I am installing it through the yum package manager on CentOS 7.1.1503 as follows:
yum install openldap-servers
After installing 'openldap-servers' I can start the OpenLDAP server by invoking service slapd start.
However, when I try to change the port by editing /etc/sysconfig/slapd, for instance by changing SLAPD_URLS to the following:
# OpenLDAP server configuration
# see 'man slapd' for additional information
# Where the server will run (-h option)
# - ldapi:/// is required for on-the-fly configuration using client tools
# (use SASL with EXTERNAL mechanism for authentication)
# - default: ldapi:/// ldap:///
# - example: ldapi:/// ldap://127.0.0.1/ ldap://10.0.0.1:1389/ ldaps:///
SLAPD_URLS="ldapi:/// ldap://127.0.0.1:3421/"
# Any custom options
#SLAPD_OPTIONS=""
# Keytab location for GSSAPI Kerberos authentication
#KRB5_KTNAME="FILE:/etc/openldap/ldap.keytab"
(see SLAPD_URLS="ldapi:/// ldap://127.0.0.1:3421/"), it fails to start:
service slapd start
Redirecting to /bin/systemctl start slapd.service
Job for slapd.service failed. See 'systemctl status slapd.service' and 'journalctl -xn' for details.
service slapd status
Redirecting to /bin/systemctl status slapd.service
slapd.service - OpenLDAP Server Daemon
Loaded: loaded (/usr/lib/systemd/system/slapd.service; disabled)
Active: failed (Result: exit-code) since Fri 2015-07-31 07:49:06 EDT; 10s ago
Docs: man:slapd
man:slapd-config
man:slapd-hdb
man:slapd-mdb
file:///usr/share/doc/openldap-servers/guide.html
Process: 41704 ExecStart=/usr/sbin/slapd -u ldap -h ${SLAPD_URLS} $SLAPD_OPTIONS (code=exited, status=1/FAILURE)
Process: 41675 ExecStartPre=/usr/libexec/openldap/check-config.sh (code=exited, status=0/SUCCESS)
Main PID: 34363 (code=exited, status=0/SUCCESS)
Jul 31 07:49:06 osboxes runuser[41691]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41693]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41695]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41697]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41699]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41701]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes slapd[41704]: @(#) $OpenLDAP: slapd 2.4.39 (Mar 6 2015 04:35:49) $
mockbuild@worker1.bsys.centos.org:/builddir/build/BUILD/openldap-2.4.39/openldap-2.4.39/servers/slapd
Jul 31 07:49:06 osboxes systemd[1]: slapd.service: control process exited, code=exited status=1
Jul 31 07:49:06 osboxes systemd[1]: Failed to start OpenLDAP Server Daemon.
Jul 31 07:49:06 osboxes systemd[1]: Unit slapd.service entered failed state.
P.S. I also disabled firewalld.
The solution was provided when I ran journalctl -xn, which basically says:
SELinux is preventing /usr/sbin/slapd from name_bind access on the tcp_socket port 9312.
***** Plugin bind_ports (92.2 confidence) suggests ************************
If you want to allow /usr/sbin/slapd to bind to network port 9312
Then you need to modify the port type.
Do
# semanage port -a -t ldap_port_t -p tcp 9312
***** Plugin catchall_boolean (7.83 confidence) suggests ******************
If you want to allow nis to enabled
Then you must tell SELinux about this by enabling the 'nis_enabled' boolean.
You can read 'None' man page for more details.
Do
setsebool -P nis_enabled 1
***** Plugin catchall (1.41 confidence) suggests **************************
If you believe that slapd should be allowed name_bind access on the port 9312 tcp_socket by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# grep slapd /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp
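For the port actually configured in /etc/sysconfig/slapd above (3421), the equivalent command should be the following (an assumption based on the suggestion above, with the port swapped in; on CentOS 7 semanage comes from the policycoreutils-python package):
semanage port -a -t ldap_port_t -p tcp 3421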
Newcomer to Postgres here!
I edited pg_hba.conf as mentioned here, but when I try to restart the postgresql service, the attempt fails. Below is the command-line output with all the information I could gather.
[root@arunpc modules]# service postgresql restart
Redirecting to /bin/systemctl restart postgresql.service
Job failed. See system logs and 'systemctl status' for details.
[root@arunpc modules]# systemctl status postgresql.service
postgresql.service - PostgreSQL database server
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled)
Active: failed since Sun, 08 Apr 2012 21:29:06 +0530; 14s ago
Process: 12228 ExecStop=/usr/bin/pg_ctl stop -D ${PGDATA} -s -m fast (code=exited, status=0/SUCCESS)
Process: 12677 ExecStart=/usr/bin/pg_ctl start -D ${PGDATA} -s -o -p ${PGPORT} -w -t 300 (code=exited, status=1/FAILURE)
Process: 12672 ExecStartPre=/usr/bin/postgresql-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Main PID: 12184 (code=exited, status=0/SUCCESS)
CGroup: name=systemd:/system/postgresql.service
[root@arunpc modules]# tail /var/log/messages
....
Apr 8 21:29:06 arunpc systemd[1]: postgresql.service: control process exited, code=exited status=1
Apr 8 21:29:06 arunpc systemd[1]: Unit postgresql.service entered failed state.
Apr 8 21:29:06 arunpc pg_ctl[12677]: pg_ctl: could not start server
Apr 8 21:29:06 arunpc pg_ctl[12677]: Examine the log output.
FWIW, here is the configuration file (pg_hba.conf) used:
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all postgres ident sameuser
local all all ident sameuser
# IPv4 local connections:
host all all 127.0.0.1 password
# IPv6 local connections:
host all all ::1 password
What could be the error here? It used to work fine before I made the edit (and since this was a development machine, I brilliantly didn't make any backup).
I would also like to get a more detailed log output. The log message in /var/log/messages file does ask me to "Examine the log output" - which log output would this be? What other troubleshooting steps can I take?
Many thanks in advance!
Depending on your startup script, it might redirect the postmaster's output to a file. This is usually server.log in the PGDATA directory. Things I'd try:
Comment out everything in pg_hba.conf and retry. If the problem is a syntax error in that file, then commenting out the offending line will allow the server to start and then you'll be able to uncomment one at a time until you find the error.
Start postmaster directly from the shell without sending it to the background. Just run postmaster -D <pgdata dir> and it should spew some more helpful logs.
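A rough sketch of both suggestions; the PGDATA path below is an assumption (the Fedora/Red Hat default), so adjust it to wherever your cluster actually lives:
# look at the server's own log first
sudo -u postgres tail -n 50 /var/lib/pgsql/data/server.log

# run the server in the foreground so any startup error goes straight to the terminal
sudo -u postgres /usr/bin/postmaster -D /var/lib/pgsql/data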