How to restrict HAProxy to serve only specific URI? - haproxy

I’m having some issues with an ACL that’s not working as intended. I have a service and I want to allow access to only very specific paths. For example, I want to allow access to www.mysite.com/hello but not www.mysite.com/bye. However, I’m getting a 403 Forbidden even on /hello. Can someone help me with the syntax? For example, if I wanted to grant access to only these resources:
www.mysite.com/hello
www.mysite.com/images
www.mysite.com/page?id=parameters
www.mysite.com/page?id=ok
www.mysite.com/page?id=test
I created the following:
acl myhost_host hdr(host) -i www.mysite.com
acl myhost_allowed_uri_paths path_beg,url_dec -i -m beg /hello | /images
acl myhost_allowed_uri_pages path_beg,url_dec -i -m beg /page
acl myhost_allowed_parm urlp(id) parameters | test | ok
http-request deny if myhost_host !myhost_allowed_uri_paths
http-request deny if myhost_host !myhost_allowed_uri_pages !myhost_allowed_parm
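The 403 on /hello most likely comes from the second deny rule: /hello does not begin with /page and carries no id parameter, so both negated ACLs are true and the request is denied. Also note that | is not an OR operator in ACL values; HAProxy ORs the space-separated values listed on one acl line, and path_beg already implies a prefix match, so combining it with -m beg is redundant at best. A sketch that turns the rules into an explicit allow-list (keeping the names from the question; adjust to your frontend):
# Sketch only: space-separated values on one acl line are ORed,
# named conditions on an http-request line are ANDed.
acl myhost_host              hdr(host) -i www.mysite.com
acl myhost_allowed_uri_paths path,url_dec -i -m beg /hello /images
acl myhost_allowed_uri_pages path,url_dec -i -m beg /page
acl myhost_allowed_parm      urlp(id) -i parameters ok test
http-request allow if myhost_host myhost_allowed_uri_paths
http-request allow if myhost_host myhost_allowed_uri_pages myhost_allowed_parm
http-request deny if myhost_host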

Related

Exclude ".html" from URL's in Google Cloud Storage

Is it possible to somehow get www.mydomain.com/testing to serve www.mydomain.com/testing.html using Google Cloud Storage?
With htaccess I use:
RewriteCond %{SCRIPT_FILENAME} !-d
RewriteRule ^([^\.]+)$ $1.html [NC,L]
jterrace wrote:
You can serve HTML content without a .html suffix. It's important to
set the content-type though. For example, with gsutil:
gsutil -h "Content-Type:text/html" cp /path/users/userProfile \
gs://www.website.com/userProfile
No. You can specify which object represents the root of your website (for instance, serving index.html when the user goes to www.domain.com/), but that is a special case. Arbitrary URL rewrite rules are not supported.
The values you can configure are documented here: https://cloud.google.com/storage/docs/website-configuration#step4
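For that index-page special case, the website configuration is set per bucket with gsutil, for example (a sketch; the bucket is assumed to be named after the domain, and newer gsutil releases spell the command "gsutil web set"):
gsutil setwebcfg -m index.html -e 404.html gs://www.mydomain.com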

Child URL Access but Domain is Blocked

I need some help configuring squid.
I have to block access to facebook.com, but I need to let a local user access a child URL like facebook.com/ChildURL.
I already did my research and found an answer:
http://www.linuxquestions.org/questions/linux-server-73/squid-acl-to-allow-access-to-a-child-url-below-a-blocked-main-url-907740/
acl good_facebook url_regex -i ^https://www.facebook.com/pages/custfbpagedetail
acl ref referer_regex -i custfbpagedetail
http_access allow good_facebook
http_access deny denied_domains !ref
But unfortunately, it's not working. Can anybody help me?
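A likely reason it fails: for HTTPS traffic Squid only sees the CONNECT request with the hostname, not the full URL path, so a url_regex against https://www.facebook.com/pages/... can never match unless SSL bumping is configured. The snippet also relies on a denied_domains ACL defined elsewhere, and rule order matters (the allow must come before the deny). A sketch for plain-HTTP traffic, with denied_domains written out explicitly (adjust names to your config):
# 'denied_domains' stands in for the blocked-domains ACL already in squid.conf
acl denied_domains dstdomain .facebook.com
acl good_facebook url_regex -i ^http://www\.facebook\.com/pages/custfbpagedetail
acl ref referer_regex -i custfbpagedetail
# allow the child URL first, then block the rest of the domain
http_access allow good_facebook
http_access deny denied_domains !ref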

301 Permanent Redirect, Redirecting Incorrectly?

I'm trying to redirect traffic that attempts to access:
cat.rs.allgames.com/cloud/11/services/titles/game5/console/file.json
to:
cat.cloud.allgames.com/titles/game5/console/file.json
using a 301 permanent redirect in an .htaccess file, but it always sends me to:
cat.cloud.allgames.comcloud/titles/game5/console/cloud/11/services/titles/game5/console/file.json
which is nowhere near correct. What am I doing wrong?
My .htaccess is located in:
cat.rs.allgames.com/cloud/11/services/titles/game5/console/file.json
and looks like this:
Redirect 301 / http://cat.cloud.allgames.com
The Rewrite module should do what you want! You need to make sure the rewrite module is enabled (for Ubuntu: sudo a2enmod rewrite), then add this to your .htaccess file.
RewriteEngine On
RewriteRule ^/?cloud/11/services/titles/(.*)$ http://cat.cloud.allgames.com/titles/$1 [R=301,L]
Here, ^/?cloud/11/services/titles/(.*)$ captures everything after the titles part of the URL (the leading slash is optional because .htaccess rules see the path with the directory prefix stripped). The $1 in the target appends what was captured to the end of the new URL. Finally, the [R=301,L] flags make it an external 301 redirect to the other server and stop further rule processing.
Hope that works for you!
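For what it's worth, the comcloud concatenation in the question is what mod_alias produces when the target has no trailing slash: Redirect maps a URL-path prefix and appends everything after it to the target. If mod_rewrite is not available, RedirectMatch can express the same mapping (a sketch, assuming the rule lives in the .htaccess at the site root):
# mod_alias alternative: regex capture instead of prefix mapping
RedirectMatch 301 ^/cloud/11/services/titles/(.*)$ http://cat.cloud.allgames.com/titles/$1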

gsutil setwebcfg -m index.html -e 404.html doesn't work

I'm having a strange problem with making a bucket on Google Cloud Storage open to the public (as a static website).
I created a bucket called fixparser.targetcompid.com. I followed Google's procedure of adding an identifying HTML file to my existing host.
I am able to copy my HTML/CSS/JS/etc. into the bucket and even view the index.html page when I provide the full URL:
http://commondatastorage.googleapis.com/fixparser.targetcompid.com/index.html
However, I can't get the index file if I only provide the general website address:
http://commondatastorage.googleapis.com/fixparser.targetcompid.com
Following is what I see when I set MainPageSuffix and NotFoundPage:
$ ./gsutil setwebcfg -m index.html -e 404.html gs://fixparser.targetcompid.com
Setting website config on gs://fixparser.targetcompid.com/...
$ ./gsutil setwebcfg -m index.html -e 404.html gs://fixparser.targetcompidxxxx.com
Setting website config on gs://fixparser.targetcompidxxxx.com/...
GSResponseError: status=404, code=NoSuchBucket, reason=Not Found.
$ ./gsutil getwebcfg gs://fixparser.targetcompid.com
Getting website config on gs://fixparser.targetcompid.com/...
<WebsiteConfiguration>
  <MainPageSuffix>index.html</MainPageSuffix>
  <NotFoundPage>404.html</NotFoundPage>
</WebsiteConfiguration>
The Google Cloud Storage website configuration will only affect requests directed to CNAME aliases of c.storage.googleapis.com. In this particular example, you probably want to set up a CNAME alias for fixparser.targetcompid.com to point to c.storage.googleapis.com. Once you do that, opening http://fixparser.targetcompid.com will load the index.html page you set up with the gsutil setwebcfg command.
Mike Schwartz, Google Cloud Storage Team
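For reference, the DNS side is a single record at whichever provider hosts targetcompid.com (a sketch in BIND zone-file notation; TTL and exact syntax vary by provider):
; point the subdomain at Google Cloud Storage
fixparser.targetcompid.com.  3600  IN  CNAME  c.storage.googleapis.com.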

Secure pseudo-streaming flv files

We use RTMP to securely stream media content through Wowza, and it works like a charm. Wowza is a really strong and robust media server for business purposes.
But we have run into a problem, and it's getting bigger for us every day. A lot of new customers can't use RTMP because of their firewall rules, and that makes it hard to deliver business media content to them.
Yet nobody has problems with HTTP pseudo-streaming or plain progressive download, the way YouTube or Vimeo serve it.
So we should do the same, but provide secure links for the pseudo-streaming traffic to prevent direct downloads through stolen links.
We use a few servers: one for the Rails app, a second for the DB, and a third as the Wowza media server.
My thinking is to set up nginx on the Wowza media server and configure it to pseudo-stream the original media files (on the same filesystem that Wowza uses to stream webcam captures).
Would you suggest using nginx with the http_secure_link_module and http_flv_module modules?
Another idea from my colleague is to build a tiny application on the Wowza side that takes the encrypted links, translates them to local file-system paths, serves the files through X-Accel-Redirect, and checks authentication via a direct connection to the DB.
Thanks a lot
I have found a solution; let me share it with anyone interested.
First of all, my constraint was to use as few tools as possible, ideally only built-in web-server modules and no upstream backend scripts. And I have a solution now.
server {
    listen 8080 ssl;
    server_name your_server.com;

    location /video/ {
        rewrite /video/([a-zA-Z0-9_\-]*)/([0-9]*)/(.*)\.flv$ /flv/$3.flv?st=$1&e=$2;
    }

    location /flv/ {
        internal;
        secure_link $arg_st,$arg_e;
        secure_link_md5 YOUR_SECRET_PASSWORD_HERE$arg_e$uri;
        if ($secure_link = "") { return 403; }
        if ($secure_link = "0") { return 403; }
        root /var/www/;
        flv;
        add_header Cache-Control 'private, max-age=0, must-revalidate';
        add_header Strict-Transport-Security 'max-age=16070400; includeSubdomains';
    }
}
The real FLV files are located in the /var/www/flv directory. To generate the secured URL on the Ruby side, you can use this script:
require 'base64'
require 'digest/md5'

# YOUR_SECRET_PASSWORD_HERE must be the same secret used in secure_link_md5 above
expiration_time = (Time.now + 2.hours).to_i # e.g. 1326559618 (2.hours needs ActiveSupport; 2 * 3600 in plain Ruby)
s = "#{YOUR_SECRET_PASSWORD_HERE}#{expiration_time}/flv/video1.flv"
a = Base64.encode64(Digest::MD5.digest(s))
b = a.tr("+/", "-_").sub('==', '').chomp # => "HLz1px_YzSNcbcaskzA6nQ"
# => "http://your_server.com:8080/video/#{b}/#{expiration_time}/video1.flv"
So the secured 2-hour URL (which you can pass to the Flash player) looks like:
"http://your_server.com:8080/video/HLz1px_YzSNcbcaskzA6nQ/1326559618/video1.flv"
P.S. Nginx should be compiled with the following options: --with-http_secure_link_module --with-http_flv_module
$ cd /usr/src
$ wget http://nginx.org/download/nginx-1.2.2.tar.gz
$ tar xzvf ./nginx-1.2.2.tar.gz && rm -f ./nginx-1.2.2.tar.gz
$ wget http://zlib.net/zlib127.zip
$ unzip zlib127.zip && rm -f zlib127.zip
$ wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.30.tar.gz
$ tar xzvf pcre-8.30.tar.gz && rm -f ./pcre-8.30.tar.gz
$ wget http://www.openssl.org/source/openssl-1.0.1c.tar.gz
$ tar xzvf openssl-1.0.1c.tar.gz && rm -f openssl-1.0.1c.tar.gz
$ cd nginx-1.2.2 && ./configure --prefix=/opt/nginx --with-pcre=/usr/src/pcre-8.30 --with-zlib=/usr/src/zlib-1.2.7 --with-openssl-opt=no-krb5 --with-openssl=/usr/src/openssl-1.0.1c --with-http_ssl_module --without-mail_pop3_module --without-mail_smtp_module --without-mail_imap_module --with-http_stub_status_module --with-http_secure_link_module --with-http_flv_module
$ make && make install
JW player and Flowplayer will automatically fall back to RTMPT (over HTTP) when an RTMP connection is unsuccessful, and Wowza makes both available. I've encountered port 1935 blocked at several locations, and the fallback to RTMPT over port 80 generally works. The caveat there, of course, is that you have to have Wowza listening on port 80 (in the VHost.xml where 1935 is defined, change it to 80,1935), and that precludes having any kind of web server listening on the same port.
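For reference, that change lives in the HostPort section of Wowza's VHost.xml (a sketch of just the relevant element; the surrounding settings stay as shipped):
<HostPort>
    <!-- listen on 80 as well as the default 1935 so RTMPT can fall back over HTTP -->
    <Port>80,1935</Port>
</HostPort>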
We use Wowza on port 80 with our clients.