Using HAProxy 1.8, I am trying to create an ACL that dynamically matches a given part of the URL path against a given header.
The portal in front of HAProxy adds a header for authenticated users:
X-roles MQ-QUEUE(QUEUE=test.queue,QUEUE=foobar.queue)
The accessed URL looks like:
https://portal/mqsrv/some/custom/path/with/queue/test.queue/in/path
My configuration so far:
frontend main
...
acl src_portal src 192.168.5.0/24
acl url_mqsrv path_beg -i /mqsrv
# working static approach
acl perm_mq req.fhdr(X-roles) -m str MQ-QUEUE(QUEUE=test.queue) if { path -m /test.queue/ }
# how to achieve this dynamically?
#
...
use_backend backend_mqsrv if src_portal url_mqsrv perm_mq
I tried to get the relevant part of the path into a variable via a regex, like:
http-request set-var(txn.requested_queue) path,reg(queue\/(\K.*)\/ \1)
That does not work, because 'reg' is an unknown converter. Other attempts used a regex to extract the queue from the path for a match against the role header, but I cannot find a working way to extract the queue part of a request path and use it for the role-header match. Another idea would be a Lua script, but I guess that would not be as performant as an ACL match.
I could not find a way to do this without a Lua script, so I solved it this way:
/etc/haproxy/haproxy.cfg:
global
...
# load Lua script to check the MQ-QUEUE() role
lua-load /etc/haproxy/check-roles.lua
frontend main
bind hap-rip:80
capture request header X-roles len 50
http-request set-var(txn.roles) req.fhdr(X-roles)
http-request set-var(txn.request_urlpath) path
acl src_portal src 192.168.5.0/24
acl url_mqsrv path_beg -i /mqsrv
# if acl matches, use backend_mqsrv
use_backend backend_mqsrv if src_portal url_mqsrv { lua.queue_allowed(txn.roles,txn.request_urlpath) -m bool true }
# if no acl matches the request, use default backend which serves a 403 forbidden response
default_backend backend_no-match
backend backend_mqsrv
log 127.0.0.1 local2
balance leastconn
acl src_portal src 192.168.5.0/24
http-request deny unless src_portal
# remove /mqsrv/ from the request URL
reqrep ^([^\ :]*)\ /mqsrv/(.*) \1\ /\2
# echo -n "username:password" | base64
reqadd Authorization:\ Basic\ *********************=
server mqsrv mqsrv.domain.local:443 ssl check-ssl check verify none
backend backend_no-match
# tcp-request content reject
mode http
http-request deny deny_status 403
/etc/haproxy/check-roles.lua:
-- example url https://portal/mqsrv/some/custom/path/with/queue/test.queue/in/path
-- example role header from portal: X-roles: MQ-QUEUE(QUEUE=ABC,QUEUE=test.queue,QUEUE=XYZ)
-- notes on haproxy lua logging: https://stackoverflow.com/questions/65879666/haproxy-lua-logging
-- https://www.codegrepper.com/code-examples/lua/lua+split+string+by+delimiter
function Split(s, delimiter)
  local result = {} -- local, so calls do not clobber a shared global
  for match in (s..delimiter):gmatch("(.-)"..delimiter) do
    table.insert(result, match)
  end
  return result
end
-- function to check if requested queue (from url path) is contained in roles-header (injected from portal)
-- https://www.haproxy.com/blog/5-ways-to-extend-haproxy-with-lua/
core.register_fetches("queue_allowed", function(txn, var1, var2)
  -- get role and path values from the request
  local roles_authorized = txn:get_var(var1)
  if roles_authorized == nil then return false end -- no X-roles header: deny
  core.log(core.info, "roles header: " .. roles_authorized)
  -- extract the requested queue from the URL path
  local request_urlpath = txn:get_var(var2)
  -- local requested_queue = Split(request_urlpath,"/")[10] -- get requested queue by position
  -- match the single path segment after 'queue/'; a greedy pattern like
  -- "queue%/(.*)%/" would also swallow the following segments (e.g. 'test.queue/in')
  local requested_queue = request_urlpath:match("queue/([^/]+)")
  if requested_queue == nil then return false end -- no queue in the path: deny
  core.log(core.info, "requested_queue: " .. requested_queue)
  -- extract the QUEUE list from inside MQ-QUEUE(...)
  local inside_parens = Split(roles_authorized, '%(')[2]
  if inside_parens == nil then return false end -- header not in the expected format: deny
  local queues = Split(inside_parens, '%)')[1]
  core.log(core.info, "queues: " .. queues)
  -- loop through the comma-separated queues to check if the requested queue is among them
  for _, item in pairs(Split(queues, ',')) do
    local queue = Split(item, '=')[2]
    core.log(core.info, "authorized queue: " .. queue)
    if queue == requested_queue then
      core.log(core.info, "requested queue " .. requested_queue .. " matched with authorized queue " .. queue .. " - allowing request!")
      return true
    end
  end
  return false
end)
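As an aside: when the queue always sits at a fixed depth in the path, the extraction itself can be done in pure config with the field converter (a sketch based on the example URL, where the queue is the 8th '/'-separated field of the path):
# no Lua needed for the extraction step itself
http-request set-var(txn.requested_queue) path,field(8,/)
Checking that value against the comma-separated QUEUE= list inside the header is the part that still needs the Lua fetch above (or one static ACL per queue).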
I am looking for a "full NGINX solution", without an intermediary redirection... But I need some external processing by "my Name-Resolver", as illustrated by this imaginary execute directive:
server {
server_name resolver.mydomain.com;
execute xx = http://localhost:123456/myNameResolver/$request_uri;
rewrite ^ http://www.adifferentdomain.com$xx? permanent;
}
So, is it possible to do something like this? Perhaps using a kind of fastcgi_pass, but only to return a string, not to hand off the whole HTTP request.
Well, you can use HttpLuaModule, which can execute commands and store the result in a variable if needed.
server {
  server_name resolver.mydomain.com;
  location / {
    # Get the rewrite target via a Lua script.
    set_by_lua_file $xx resolver-script.lua $request_uri;
    rewrite ^ http://www.adifferentdomain.com$xx? permanent;
  }
}
You just need a Lua script to do the request for you. Pass $request_uri after the file name, as above; extra values there are handed to the script as arguments, so it shows up inside the script as ngx.arg[1]. For example:
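A minimal sketch of resolver-script.lua under that assumption. The resolver host and port are placeholders taken from the question, and since the nginx subrequest API is not available in the set_by_lua* context, the script shells out to curl (set_by_lua* runs blocking, so the resolver call should be fast):
-- resolver-script.lua: ask the external name resolver to map the request URI
local uri = ngx.arg[1]  -- $request_uri passed in from the config
-- caution: sanitize the URI before interpolating it into a shell command
local handle = io.popen("curl -s 'http://localhost:12345/myNameResolver" .. uri .. "'")
local result = handle:read("*a")
handle:close()
-- strip trailing whitespace and hand the value back to nginx as $xx
return (result:gsub("%s+$", ""))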
I want to redirect an old path to a new one without changing the URL, keeping the suffix of the URL for the redirection:
dev-osm.blah.com/NYC/v2/site/links?key=a12345
redirect to
dev-osm.blah.com/NYC/v1/site/links2?key=a12345
server {
  server_name dev-osm.blah.com;
  .
  .
  location ^~ /NYC/v2/site/links {
    rewrite ^ /NYC/v1/site/links2;
  }
}
Use something like the following in your server config. Note that proxy_pass is a directive (no colon) and takes a full upstream URL; with a URI part, the matched location prefix is replaced by that URI, so the client-visible URL stays the same:
location /NYC/v2/site/links {
  # loop back to this same server (adjust host/port as needed)
  proxy_pass http://127.0.0.1/NYC/v1/site/links2;
}
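If proxying back to the same server feels heavy, an internal rewrite does the same mapping without the extra hop (a sketch using the paths from the question; the query string ?key=a12345 is carried over automatically):
location ^~ /NYC/v2/site/links {
  # internally map /NYC/v2/site/links... to /NYC/v1/site/links2...;
  # 'last' re-runs location matching with the new URI, and the client URL is unchanged
  rewrite ^/NYC/v2/site/links(.*)$ /NYC/v1/site/links2$1 last;
}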
I'm using ProFTPD on Debian 7.
I need to jail each user in their own home directory, so they can't see and access parent folders.
Currently each user logs in to their own home directory, but they can still see and access parent folders.
As you can see below, I have already tried DefaultRoot ~ developers and also DefaultRoot ~.
How can I jail each user in their own home directory, so they can't see and access parent folders?
This is my proftpd.conf:
#
# /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file.
# To really apply changes, reload proftpd after modifications, if
# it runs in daemon mode. It is not required in inetd/xinetd mode.
#
# Includes DSO modules
Include /etc/proftpd/modules.conf
# Set off to disable IPv6 support which is annoying on IPv4 only boxes.
UseIPv6 on
# If set on you can experience a longer connection delay in many cases.
IdentLookups off
ServerName "Debian"
ServerType standalone
DeferWelcome off
MultilineRFC2228 on
DefaultServer on
ShowSymlinks on
TimeoutNoTransfer 600
TimeoutStalled 600
TimeoutIdle 1200
DisplayLogin welcome.msg
DisplayChdir .message true
ListOptions "-l"
DenyFilter \*.*/
# Use this to jail all users in their homes
DefaultRoot ~ developers
#DocumentRoot ~
# Users require a valid shell listed in /etc/shells to login.
# Use this directive to relax that constraint.
# RequireValidShell off
# Port 21 is the standard FTP port.
Port 21
# In some cases you have to specify a passive ports range to bypass
# firewall limitations. Ephemeral ports can be used for that, but
# feel free to use a narrower range.
# PassivePorts 49152 65534
# If your host was NATted, this option is useful in order to
# allow passive transfers to work. You have to use your public
# address and open the passive ports used on your firewall as well.
# MasqueradeAddress 1.2.3.4
# This is useful for masquerading address with dynamic IPs:
# refresh any configured MasqueradeAddress directives every 8 hours
<IfModule mod_dynmasq.c>
# DynMasqRefresh 28800
</IfModule>
# To prevent DoS attacks, set the maximum number of child processes
# to 30. If you need to allow more than 30 concurrent connections
# at once, simply increase this value. Note that this ONLY works
# in standalone mode, in inetd mode you should use an inetd server
# that allows you to limit maximum number of processes per service
# (such as xinetd)
MaxInstances 30
# Set the user and group that the server normally runs at.
User proftpd
Group nogroup
# Umask 022 is a good standard umask to prevent new files and dirs
# (second parm) from being group and world writable.
Umask 022 022
# Normally, we want files to be overwriteable.
AllowOverwrite on
# Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords:
# PersistentPasswd off
# This is required to use both PAM-based authentication and local passwords
# AuthOrder mod_auth_pam.c* mod_auth_unix.c
# Be warned: use of this directive impacts CPU average load!
# Uncomment this if you like to see progress and transfer rate with ftpwho
# in downloads. That is not needed for uploads rates.
#
# UseSendFile off
TransferLog /var/log/proftpd/xferlog
SystemLog /var/log/proftpd/proftpd.log
# Logging onto /var/log/lastlog is enabled but set to off by default
#UseLastlog on
# In order to keep log file dates consistent after chroot, use timezone info
# from /etc/localtime. If this is not set, and proftpd is configured to
# chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight
# savings timezone regardless of whether DST is in effect.
#SetEnv TZ :/etc/localtime
<IfModule mod_quotatab.c>
QuotaEngine off
</IfModule>
<IfModule mod_ratio.c>
Ratios off
</IfModule>
# Delay engine reduces impact of the so-called Timing Attack described in
# http://www.securityfocus.com/bid/11430/discuss
# It is on by default.
<IfModule mod_delay.c>
DelayEngine on
</IfModule>
<IfModule mod_ctrls.c>
ControlsEngine off
ControlsMaxClients 2
ControlsLog /var/log/proftpd/controls.log
ControlsInterval 5
ControlsSocket /var/run/proftpd/proftpd.sock
</IfModule>
<IfModule mod_ctrls_admin.c>
AdminControlsEngine off
</IfModule>
#
# Alternative authentication frameworks
#
#Include /etc/proftpd/ldap.conf
#Include /etc/proftpd/sql.conf
#
# This is used for FTPS connections
#
#Include /etc/proftpd/tls.conf
#
# Useful to keep VirtualHost/VirtualRoot directives separated
#
#Include /etc/proftpd/virtuals.conf
# A basic anonymous configuration, no upload directories.
# <Anonymous ~ftp>
# User ftp
# Group nogroup
# # We want clients to be able to login with "anonymous" as well as "ftp"
# UserAlias anonymous ftp
# # Cosmetic changes, all files belongs to ftp user
# DirFakeUser on ftp
# DirFakeGroup on ftp
#
# RequireValidShell off
#
# # Limit the maximum number of anonymous logins
# MaxClients 10
#
# # We want 'welcome.msg' displayed at login, and '.message' displayed
# # in each newly chdired directory.
# DisplayLogin welcome.msg
# DisplayChdir .message
#
# # Limit WRITE everywhere in the anonymous chroot
# <Directory *>
# <Limit WRITE>
# DenyAll
# </Limit>
# </Directory>
#
# # Uncomment this if you're brave.
# # <Directory incoming>
# # # Umask 022 is a good standard umask to prevent new files and dirs
# # # (second parm) from being group and world writable.
# # Umask 022 022
# # <Limit READ WRITE>
# # DenyAll
# # </Limit>
# # <Limit STOR>
# # AllowAll
# # </Limit>
# # </Directory>
#
# </Anonymous>
# Include other custom configuration files
Include /etc/proftpd/conf.d/
<Global>
AccessGrantMsg "Welcome to the Up3Up demo server! Remember to always back up whatever you modify!"
AccessDenyMsg "Invalid login to the Up3Up demo FTP server!"
</Global>
This is how I create users and set their home directories:
#!/bin/bash
echo "Procedure for creating an FTP user . . ."
# Ask for the account name
read -p "Enter the name (without #up3up.net): " user
# Ask for the path
echo "Path for $user # up3up.net (without /var/www/up3upn/public_html/)"
read -p "Enter the path: " percorso
# Create the path if it does not already exist
mkdir /var/www/up3upn/public_html/"$percorso" &> /dev/null
# Announce that the password will be requested
echo "Enter the plain-text password for $user # up3up.net"
# Create the account and set the password
useradd -d /var/www/up3upn/public_html/"$percorso" "$user" &> /dev/null
usermod -m -d /var/www/up3upn/public_html/"$percorso" "$user" &> /dev/null
# add the user to the developers group (the original used a second useradd,
# which fails because the user already exists)
usermod -aG developers "$user" &> /dev/null
passwd "$user"
echo "Created account $user # up3up.net with path /var/www/up3upn/public_html/$percorso"
# Restart the FTP service
service proftpd restart &> /dev/null
You have to create a group (like ftpjail) and add all users that should be jailed to this group, for example:
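# create the jail group and add an existing account to it (names are placeholders)
groupadd ftpjail
usermod -aG ftpjail someuser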
Then add this line to your proftpd.conf (it does not have to be at the end of the file):
# this must be a group!
DefaultRoot ~ ftpjail
Now restart your FTP server and the users are chrooted and jailed!
I had the same problem. I found out that the DefaultRoot ~ developers line needs to be at the end of the config file, presumably so that nothing included later overrides it.
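A sketch of the tail of the file with that placement (assuming a trailing Include was what overrode it before):
# Include other custom configuration files
Include /etc/proftpd/conf.d/
# jail members of the developers group into their home directories
DefaultRoot ~ developers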
I want to create a web service for my PhoneGap Android application, which will in turn call a Progress 4GL 9.1D procedure.
Does anyone have an idea how to create a web service for this?
That will be a struggle. You CAN create a server that listens on a socket, but you will have to handle everything yourself!
Look at this example.
However, you are likely better off writing the web service in a language with better support and then finding another way of getting the data out of the DB. If you're really stuck with a 10+ year old version, you really should consider migrating to something else.
You don't have to upgrade everything -- you could just obtain a license for a version 10 client. V10 clients can connect to v9 databases (the rule is that the client can be up to one major release higher than the server), so you could use that to build a SOAP service. Or you could get a v10 "WebSpeed" license.
Or you could write a simple enough CGI wrapper to some 4GL code if you have those sorts of skills. I occasionally toss together something like this:
#!/bin/bash
#
LOGFILE=/tmp/myservice.log
SVC=sample
# if a FIFO does not exist for the specified service then create it in /tmp
#
# $1 = direction -- in or out
# $2 = unique service name
#
pj_fifo() {
if [ ! -p /tmp/$2.$1 ]
then
echo `date` "Creating FIFO $2.$1" >> ${LOGFILE}
rm -f /tmp/$2.$1 >> ${LOGFILE} 2>&1
/bin/mknod -m 666 /tmp/$2.$1 p >> ${LOGFILE} 2>&1
fi
}
if [ "${REQUEST_METHOD}" = "POST" ]
then
read QUERY_STRING
fi
# header must include a blank line
#
# we're returning XML
#
echo "Content-type: text/xml" # or text/html or text/plain
echo
# debugging echo...
#
# echo $QUERY_STRING
#
# echo "<html><head><title>Sample CGI Interface</title></head><body><pre>QUERY STRING = ${QUERY_STRING}</pre></body></html>"
# ensure that the FIFOs exist
#
pj_fifo in $SVC
pj_fifo out $SVC
# make the request
#
echo "$QUERY_STRING" > /tmp/${SVC}.in
# send the response back to the requestor
#
cat /tmp/${SVC}.out
# all done!
#
echo `date` "complete" >> ${LOGFILE}
Then you just arrange for a background session to be reading /tmp/sample.in:
/* sample.p
*
* mbpro dbname -p sample.p > /tmp/sample.log 2>&1 &
*
*/
define variable request as character no-undo.
define variable result as character no-undo.
input from value( "/tmp/sample.in" ).
output to value( "/tmp/sample.out" ).
do while true:
  import unformatted request.
  /* parse it and do something with it... */
  result = '<?xml version="1.0"?>~n<status>~n'.
  result = result + "ok". /* or whatever turns your crank... */
  result = result + "</status>~n".
  put unformatted result. /* write the response to the out FIFO */
end.
When input arrives, parse the line and do whatever is needed. Spit the answer back out to /tmp/sample.out and loop. It's not very fancy, but if your needs are modest it is easy to do. If you need more scalability, robustness, or security, you might ultimately need something more sophisticated, but this will at least let you get started prototyping.
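Once the wrapper is installed as a CGI (the path and query string here are just examples), it can be exercised with:
curl "http://localhost/cgi-bin/sample?hello=world"
The XML status document built in sample.p should come back as the response body.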
I have a lighttpd 1.4.26 (SSL) configuration on a CentOS Linux machine serving an HTML5 media application over HTTPS.
My goal is to serve media files via the application over HTTP from the same webserver.
The webserver is located at https://www.media.com/ and all the media is located in various subfolders of http://www.media.com/sharedmedia/XXXXX, with relative links to the media files in the pages served over HTTP. I want all requests for .mp3, .mp4, .webm, and .ogv files to be redirected to the exact same URL, but using http instead of https.
My problem is that I do not know how to write a url.redirect rule to perform this translation.
I have tried:
url.redirect = ( "https://^(.*)\.(ogv|mp4|mp3|webm)$" => "http://%1/$1" )
And when I visit this URL:
https://www.media.com/sharedmedia/X-MAC-MINI/Sports/Amazing%20Football%20Skills%20and%20Tricks.ogv
I am 301 permanently redirected to
http://www.media.com/sharedmedia/X-MAC-MINI/Sports/Amazing0Football0Skills0and0Tricks.ogv
which is then also 301'ed to:
http:///sharedmedia/AFFINEGY-MAC-MINI/Sports/Amazing0Football0Skills0and0Tricks
Notice that the %20s (URL-encoded spaces) in the original URL were dropped during the first redirect, leaving a trailing '0' in each case (I assume %2 was interpreted as a capture reference that holds an empty string). The request is also erroneously redirected to another URL that doesn't even contain the host (www.media.com), and the extension is left off the second redirect.
I then tried a conditional version after that:
$HTTP["socket"] =~ ":443$"
{
url.redirect = ( "^(.*)\.(ogv|mp4|mp3|webm)$" => "http://%1/$1" )
}
...which results in lighttpd simply crashing on startup, so I can't even test it. The lighttpd startup error message follows:
Starting lighttpd: 2011-08-31 16:19:15: (configfile.c.907) source: find /etc/lighttpd/conf.d -maxdepth 1 -name '*.conf' -exec cat {} \; line: 44 pos: 1 parser failed somehow near here: (EOL)
2011-08-31 16:19:15: (configfile.c.907) source: /etc/lighttpd/lighttpd.conf line: 331 pos: 1 parser failed somehow near here: (EOL)
Any ideas what I'm doing wrong?
Here's what I did wrong:
You need a nested conditional to use the %n notation in the redirect destination.
I had
$HTTP["socket"] =~ ":443$"
{
url.redirect = ( "^(.*)\.(ogv|mp4|mp3|webm)$" => "http://%1/$1" )
}
But I needed
$HTTP["socket"] =~ ":443$" {
$HTTP["host"] == (.*) {
url.redirect = ( "^(.*)\.(ogv|mp4|mp3|webm)$" => "http://%1/$1.$2" )
}
}
Now the %1 in "http://%1/$1.$2" refers to the capture in $HTTP["host"] =~ "(.*)", while $1 and $2 refer to the first and second parenthesized captures in the url.redirect source pattern. Note also that the opening brace has to sit on the same line as the condition; that is most likely what the "parser failed somehow near here: (EOL)" startup error was complaining about.
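For example, for a request like
https://www.media.com/sharedmedia/X-MAC-MINI/Sports/clip.ogv
%1 captures www.media.com, $1 captures /sharedmedia/X-MAC-MINI/Sports/clip, and $2 captures ogv, so the redirect target becomes http://www.media.com//sharedmedia/X-MAC-MINI/Sports/clip.ogv. Since $1 keeps its leading slash, writing the target as "http://%1$1.$2" avoids the doubled slash.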
Is it just me, or is this totally undocumented? I can only find people on Google complaining about not being able to get this to work, and no one seems to have an answer for how it works...
I'm now stuck making the exact same thing happen in Apache, and I can't find good documentation on .htaccess 301 redirects for it either...
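For the Apache side, a minimal .htaccess sketch of the same redirect, assuming mod_rewrite is enabled:
RewriteEngine On
# only touch requests that arrived over HTTPS
RewriteCond %{HTTPS} on
# redirect media files to the same host and path over plain HTTP
RewriteRule ^(.*)\.(ogv|mp4|mp3|webm)$ http://%{HTTP_HOST}/$1.$2 [R=301,L]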