We are using HAProxy as our load balancer and Splunk to display the UI version of the logs. Our requests have certain query params containing sensitive data that we don't want exposed in the Splunk or HAProxy logs.
Example request:
https://localhost:8080/car/request?car_price=60000&car_unique_id=abcdefghijkl&car_color=green
This whole request is placed in the HAProxy log as-is:
Apr 29 13:02:39 localhost haproxy[29119]: client_address = "::ffff:127.0.0.0", client_port = 00012, server_address = "12.12.12.12", server_port = 11001 path, = "/car/request?car_price=60000&car_unique_id=abcdefghijkl&car_color=green" , status = 200
This has the sensitive car_unique_id in it, and it's displayed as-is in Splunk. I tried using log-format in haproxy.cfg, but that completely hides the path.
Is there any way to hide only that query param in HAProxy 2.4.x using config?
I need the log to be recorded as:
Apr 29 13:02:39 localhost haproxy[29119]: client_address = "::ffff:127.0.0.0", client_port = 00012, server_address = "12.12.12.12", server_port = 11001 path, = "/car/request?car_price=60000&car_color=green" , status = 200
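One possible approach is a sketch like the following (untested; it assumes HAProxy 2.4's regsub converter and a custom log-format, and the variable name txn.log_url and frontend name are illustrative):

```
# haproxy.cfg -- sketch, untested; assumes HAProxy 2.4's regsub converter
frontend fe_main
    # Copy the request URL with the sensitive parameter stripped.
    # This simple pattern assumes car_unique_id is not the last parameter;
    # the trailing "&" is consumed along with the value.
    http-request set-var(txn.log_url) url,regsub(car_unique_id=[^&]*&,)
    # Reference the sanitized copy in a custom log-format instead of the raw path
    log-format "client_address = \"%ci\", client_port = %cp, server_address = %si, server_port = %sp, path = \"%[var(txn.log_url)]\", status = %ST"
```

The idea is to log a sanitized copy of the URL rather than suppressing the path entirely; the exact regex would need adjusting if the parameter can appear last or the value can contain escaped ampersands.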
I'm trying to set up JupyterHub. Port 8000 is used by a different program, so I have to use a different port.
I changed the file /etc/jupyterhub/jupyterhub_config.py, adding/uncommenting:
c.JupyterHub.hub_port = 9003
c.JupyterHub.ip = '111.111.11.1'
c.JupyterHub.port = 9002
c.ConfigurableHTTPProxy.api_url = 'http://127.0.0.1:9000'
When I try running jupyterhub, I get this error:
[W 2020-06-03 14:48:48.930 JupyterHub proxy:554] Stopped proxy at pid=47639
[W 2020-06-03 14:48:48.932 JupyterHub proxy:643] Running JupyterHub without SSL. I hope there is SSL termination happening somewhere else...
[I 2020-06-03 14:48:48.932 JupyterHub proxy:646] Starting proxy at http://111.111.11.1:9002/
14:48:49.301 [ConfigProxy] info: Proxying http://111.111.11.1:9002 to (no default)
14:48:49.307 [ConfigProxy] info: Proxy API at http://127.0.0.1:9000/api/routes
14:48:49.315 [ConfigProxy] error: Uncaught Exception
[E 2020-06-03 14:48:49.437 JupyterHub app:2718]
Traceback (most recent call last):
File "/home/user/miniconda/2020.02/python/3.7/lib/python3.7/site-packages/jupyterhub/app.py", line 2716, in launch_instance_async
await self.start()
File "/home/user/miniconda/2020.02/python/3.7/lib/python3.7/site-packages/jupyterhub/app.py", line 2524, in start
await self.proxy.get_all_routes()
File "/home/user/miniconda/2020.02/python/3.7/lib/python3.7/site-packages/jupyterhub/proxy.py", line 806, in get_all_routes
resp = await self.api_request('', client=client)
File "/home/user/miniconda/2020.02/python/3.7/lib/python3.7/site-packages/jupyterhub/proxy.py", line 774, in api_request
result = await client.fetch(req)
tornado.httpclient.HTTPClientError: HTTP 403: Forbidden
What is the correct way to install jupyterhub on a port other than 8000?
Thanks.
I think some of these parameters are now obsolete, so it may depend on which version you are running, but I'll assume JupyterHub 1.0+.
There are a few different services that make up JupyterHub, and the 'hub' service, confusingly, is not actually the one you are concerned with. The proxy is the main entrypoint to the application; it proxies traffic to the hub by default, and to specific user Jupyter servers if the traffic is to a /user/ URL.
In addition, the 'hub' service also has an API endpoint that user servers can access directly (this doesn't go through the proxy), and the proxy has an extra API endpoint too, for direct access from the hub.
It is the proxy service that defaults to port 8000. To change it to 80, for example, try this:
## The public facing URL of the whole JupyterHub application.
#
# This is the address on which the proxy will bind. Sets protocol, ip, base_url
c.JupyterHub.bind_url = 'https://0.0.0.0:80'
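Applied to the question's setup (the IP and ports 9002/9000 are taken from the question; a sketch, not verified):

```python
# /etc/jupyterhub/jupyterhub_config.py -- sketch using the question's values
# One setting replaces c.JupyterHub.ip / c.JupyterHub.port:
c.JupyterHub.bind_url = 'http://111.111.11.1:9002'
# The proxy's own API endpoint (hub-to-proxy traffic, not public):
c.ConfigurableHTTPProxy.api_url = 'http://127.0.0.1:9000'
```

Note the HTTP 403 in the traceback is the proxy's API rejecting the hub, which suggests the api_url change and the proxy's actual API port got out of sync; make sure only one of these settings is uncommented per service.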
I need some help creating a custom fail2ban filter for a custom app: a WebSocket server written in Node.js. As I understand from other articles, the custom Node.js app needs to write a log recording any failed authentication attempts, which fail2ban then reads to block the IP in question. I need an example of the log lines my app should create so that fail2ban can read or scan them, and also an example custom fail2ban filter that reads that log and blocks IPs for brute force.
It's a really old question, but I found it on Google, so I'll write an answer.
The most important thing is that the line you log needs the right timestamp, because fail2ban uses it to ban and unban. If the time in the log file differs from the system time, fail2ban will not match the line, so set the correct time zone and time on the host system. In the example below I used UTC time plus the time-zone offset and everything works. Fail2ban recognizes several timestamp formats, but I didn't find a full description; you can find two examples in the fail2ban manual. There is also a command to check whether your line is matched by your regular expression (fail2ban-regex, shown below); I really recommend using it, along with an online regular-expression tester.
The rest of the log line is not really important; you just need to include the user's IP.
Those are the most important points, but I will also write out an example. I'm just learning, so I did this for educational purposes; I'm not sure the example makes complete sense, but it works. I used nginx, fail2ban, pm2, and Node.js with Express on Debian 10 to ban empty/bad POST requests based on Google reCAPTCHA. So first set the right time on your system.
On Debian 10 this worked:
timedatectl list-timezones
sudo timedatectl set-timezone your_time_zone
timedatectl    # check the current time
First of all, you need to pass the real user IP through nginx, which means adding a line to your server block:
sudo nano /etc/nginx/sites-available/example.com
Find the location block and add this line:
location / {
...
proxy_set_header X-Forwarded-For $remote_addr;
...
}
More about reverse proxy.
Now in the Node.js app just add
app.set('trust proxy', true)
and you can now get the user IP using:
req.ip
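If you ever need to read the header yourself (outside Express), the left-most entry of a comma-separated X-Forwarded-For list is the original client. A small sketch (the helper name is mine):

```javascript
// Extract the original client IP from an X-Forwarded-For header value.
// With the nginx line above the header holds just $remote_addr; behind a
// chain of proxies it is a comma-separated list and the left-most entry
// is the original client.
function clientIpFromXff(headerValue) {
  if (!headerValue) return null;
  return headerValue.split(',')[0].trim();
}
```

Keep in mind this header is only trustworthy when it is set by your own proxy, which is exactly what the `trust proxy` setting tells Express.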
Making it work with recaptcha:
All about recaptcha is here: Google Developers
When you get the user's response token, you need to send a POST request to Google to verify it. I did it using axios. This is how to send the POST request; secret is your secret key and response is the user's response token.
const axios = require('axios');
axios
  .post(`https://www.google.com/recaptcha/api/siteverify?secret=${secret}&response=${response}`, {}, {
    headers: {
      "Content-Type": "application/x-www-form-urlencoded; charset=utf-8"
    },
  })
  .then(async function (tokenres) {
    const {
      success, // true or false
      challenge_ts,
      hostname
    } = tokenres.data;
    if (success) {
      // Do something
    } else {
      // For fail2ban you need a timestamp it can parse.
      // There may be an easier way, but with my level of experience
      // I did it like this:
      const now = new Date();
      const tZOffset = now.getTimezoneOffset() / 60; // hours between UTC and local time
      const month = now.toLocaleString('en-US', { month: 'short' });
      const day = now.getUTCDate();
      const hours = now.getUTCHours() - tZOffset; // local hour (whole-hour offsets only)
      const minutes = now.getUTCMinutes();
      const seconds = now.getUTCSeconds();
      console.log(`${month} ${day} ${hours}:${minutes}:${seconds} Captcha verification failed [${req.ip}]`);
      res.send(/* something */);
    }
  });
The time-zone offset is applied to get the right local time. pm2 saves console.log output to a log file at /home/youruserdir/.pm2/logs/yourappname-out.log
Now make an empty POST request. An example bad-request line will look like this:
Oct 14 19:5:3 Captcha verification failed [IP ADDRESS]
Note that the minutes and seconds have no leading zeros, but fail2ban still recognizes them, so it's not a problem. BUT CHECK THAT THE DATE AND TIME MATCH YOUR SYSTEM TIME.
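If you do want the zero-padding, here is a small sketch (the function name is mine; using the local-time getters avoids the manual UTC-offset arithmetic above):

```javascript
// Build a syslog-style timestamp ("Oct 14 19:05:03") in local time,
// zero-padded everywhere for a consistent format.
function fail2banTimestamp(date = new Date()) {
  const pad = (n) => String(n).padStart(2, '0');
  const month = date.toLocaleString('en-US', { month: 'short' });
  return `${month} ${pad(date.getDate())} ${pad(date.getHours())}:${pad(date.getMinutes())}:${pad(date.getSeconds())}`;
}
```

As noted above, fail2ban's default date patterns are fairly tolerant, so test whatever format you settle on with fail2ban-regex.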
Now make filter file for fail2ban:
sudo nano /etc/fail2ban/filter.d/your-filter.conf
paste:
[Definition]
failregex = Captcha verification failed \[<HOST>\]
ignoreregex =
Now ctrl+o, ctrl+x, and you can check whether fail2ban recognizes the error lines using the fail2ban-regex command:
fail2ban-regex /home/youruserdir/.pm2/logs/yourappname-out.log /etc/fail2ban/filter.d/your-filter.conf
Result will be:
Failregex: 38 total
|- #) [# of hits] regular expression
| 1) [38] Captcha verification failed \[<HOST>\]
`-
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
| [38] {^LN-BEG}(?:DAY )?MON Day %k:Minute:Second(?:\.Microseconds)?(?: ExYear)?
`-
Lines: 42 lines, 0 ignored, 38 matched, 4 missed
[processed in 0.04 sec]
As you can see, 38 lines matched; you will have one. If you have no matches, check the pm2 log file. When I was testing on localhost, my app reported the IP address as ::127.0.0.1, which may be IPv6-related and can cause a problem.
Next:
sudo nano /etc/fail2ban/jail.local
Add following block:
[Your-Jail-Name]
enabled = true
filter = your-filter
logpath = /home/YOURUSERDIR/.pm2/logs/YOUR-APP-NAME-out.log
maxretry = 5
findtime = 10m
bantime = 10m
Be sure to write the filter name without the .conf extension.
In logpath, be sure to use the right user directory and log name. If there are 5 (maxretry) bad POST requests within 10 minutes (findtime), the user will be banned for 10 minutes. You can change these values.
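The maxretry/findtime rule amounts to a sliding window over recent failures. A toy model (this is not fail2ban's real code, just an illustration of the counting):

```javascript
// Toy model of fail2ban's maxretry/findtime decision.
// failures: array of failure timestamps in seconds; returns true if
// at least maxretry failures fall inside the findtime window ending at `now`.
function shouldBan(failures, now, maxretry = 5, findtimeSec = 600) {
  const recent = failures.filter((t) => now - t <= findtimeSec);
  return recent.length >= maxretry;
}
```

So a burst of 5 bad requests inside 10 minutes trips the ban, while failures spread out over a longer period age out of the window and never accumulate.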
Now just restart nginx and fail2ban:
sudo systemctl restart nginx
sudo systemctl restart fail2ban
Afterwards, you can check whether your jail is working using:
sudo fail2ban-client status YOUR-JAIL-NAME
It shows how many matches were found and how many IPs are banned. You can find more information in the fail2ban log file:
cat /var/log/fail2ban.log
Found IPADDR - 2021-10-13 13:12:57
NOTICE [YOUR-JAIL-NAME] Ban IPADDRES
I wrote this out step-by-step because probably only people with little experience will be looking for this. If you see mistakes or have suggestions, just comment.
I have configured uWSGI to serve my Django app on a unix socket, and Nginx as a proxy to this socket. The server is running CentOS 7. I think I have configured Nginx so that it has permission to read and write to uWSGI's socket, but I'm still getting a permission denied error. Why can't Nginx access the uWSGI socket on CentOS 7?
[uwsgi]
socket=/socket/uwsgi.sock
virtualenv=/home/site/virtsite/
chdir=/home/site/wsgitest/
module=wsgitest.wsgi:application
vhost = true
master=True
workers=8
chmod-socket=666
pidfile=/home/site/wsgitest/uwsgi-master.pid
max-requests=5000
chown-socket=nginx:nginx
uid = nginx
gid = nginx
listen.owner = nginx
listen.group = nginx
server {
listen 80;
location / {
uwsgi_pass unix:///home/site/wsgitest/uwsgi.sock;
include uwsgi_params;
}
}
uwsgi --ini uwsgi.ini (as root)
ls -l /home/site/wsgitest/uwsgi.sock
srwxrwxrwx. 1 nginx nginx 0 Oct 13 10:05 uwsgi.sock
2014/10/12 19:01:44 [crit] 19365#0: *10 connect() to unix:///socket/uwsgi.sock failed (13: Permission denied) while connecting to upstream, client: 2.191.102.217, server: , request: "GET / HTTP/1.1", upstream: "uwsgi://unix:///socket/uwsgi.sock:", host: "179.227.126.222"
The Nginx and uWSGI configurations are correct. The problem is that SELinux denied Nginx access to the socket. This results in a generic access denied error in Nginx's log. The important messages are actually in SELinux's audit log.
# show the new rules to be generated
grep nginx /var/log/audit/audit.log | audit2allow
# show the full rules to be applied
grep nginx /var/log/audit/audit.log | audit2allow -m nginx
# generate the rules to be applied
grep nginx /var/log/audit/audit.log | audit2allow -M nginx
# apply the rules
semodule -i nginx.pp
You may need to generate the rules multiple times, trying to access the site after each pass, since the first SELinux error might not be the only one that can be generated. Always inspect the policy that audit2allow suggests creating.
These steps were taken from this blog post which contains more details about how to investigate and what output you'll get.
Configure your uwsgi.ini with the uid and gid of the nginx user:
#uwsgi.ini
uid = nginx
gid = nginx
Regards,
I wish I could comment :(
Everything looks fine from here except the unix socket path:
unix:///socket/uwsgi.sock failed (2: No such file or directory)
The docs say it should have just one slash:
uwsgi_pass unix:/tmp/uwsgi.socket;
I'm using Sphinx 2.0.5-release on both servers.
Both servers have the same indexes, and searchd is running on both. But I would like to fetch the data of the remote server (Server 2) from Server 1.
I used this code:
$cl = new SphinxClient;
$cl->SetServer(remote_sphinx_server, 9312); // remote_sphinx_server: IP address of the 2nd server
$cl->SetMatchMode(SPH_MATCH_EXTENDED);
$result = $cl->Query("", "$indexer");
But I don't get any response. I'm getting the error:
connection to "Server 2 IP:9312" failed (errno=113, msg=No route to host)
If I use the code below:
$cl = new SphinxClient;
$cl->SetMatchMode(SPH_MATCH_EXTENDED);
$result = $cl->Query("","$indexer");
I get a proper response, as the data comes from the local Sphinx.
What could be the problem with fetching data from the remote server? Any help is much appreciated.
Thank you
You may have multiple network interfaces on server 2 and be using one of the IPs that is not reachable from server 1.
Check whether the firewall allows communication on port 9312.
Check whether searchd is actually running on server 2. Also, by default, searchd opens the port on all available interfaces unless told otherwise. Check searchd.log for any errors reported about opening the port(s).
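Since errno 113 (No route to host) is a network-level failure, the fix is usually firewall or routing rather than Sphinx itself. For completeness, a sketch of the relevant searchd section on server 2 (paths and values are illustrative):

```
# sphinx.conf on server 2 -- sketch; by default searchd listens on all interfaces
searchd
{
    # be explicit about the interface/port if needed:
    listen = 0.0.0.0:9312
    log    = /var/log/sphinxsearch/searchd.log
}
```

If the listen line already covers all interfaces and searchd.log is clean, that points back at the firewall or the chosen IP.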
I have a server configured with Varnish on port 80 proxying to Apache on port 81.
If I go to a folder without a trailing slash (http://xxx/test), it redirects to http://xxx:81/test/.
Is it possible to stop Apache from doing that? It breaks every single thing from phpMyAdmin to SVN....
Maybe a rule in Varnish could drop the :81 from the redirect? But I'm not good at configuring advanced Varnish.
sub vcl_deliver {
    # Rewrite backend redirects: drop the ":81" port from any Location header
    if (resp.http.Location) {
        set resp.http.Location = regsub(resp.http.Location, ":81/", "/");
    }
}
(Ideally, also check that the request host matches the host in the response's Location header.)
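A slightly safer variant of the snippet above only strips the port when it appears in the host part of the URL, so redirects pointing at other hosts are left alone (a sketch, untested):

```
sub vcl_deliver {
    if (resp.http.Location) {
        # Remove ":81" only from the authority part of the redirect URL
        set resp.http.Location = regsub(resp.http.Location, "^(https?://[^/:]+):81/", "\1/");
    }
}
```

The root cause can also be fixed on the Apache side by setting ServerName and UseCanonicalName/Port appropriately, so that mod_dir builds the redirect with the public port instead of 81.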