Cache Static Contents in HAProxy

Is there a suitable way to do caching in HAProxy? I want to cache only the static files (css, js, images), so my initial plan was to do something like the following:
backend app_backend
    mode http
    balance source
    option httpchk GET /heartbeat
    http-check expect status 200
    http-request cache-use mycache if { path -i -m beg /html/ } || { path -i -m beg /css/ } || { path -i -m beg /favicon.ico } || { path -i -m beg /assets/ } || { path -i -m beg /images/ } || { path -i -m beg /js/ }
    http-response cache-store mycache
    option httpclose
But I don't see anything getting cached on my proxy server.
What am I doing wrong?
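One thing worth checking, as a guess based on the snippet above (which never declares the cache): cache-use and cache-store can only reference a cache that has been declared in its own cache section of haproxy.cfg. A minimal sketch, with illustrative values:

# Declared at the top level of haproxy.cfg, outside any frontend/backend.
cache mycache
    total-max-size 64   # total cache size in megabytes (illustrative value)
    max-age 240         # seconds before a cached object expires (illustrative value)

Also keep in mind that HAProxy only stores responses it considers cacheable (for example, GET requests answered with a 200 that fit into a buffer), so backend response headers can silently prevent caching.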

Related

haproxy limit access to root path for specific ip range, but allow from anywhere for specific subdirectory

I'm using HA-Proxy version 1.8.19
I want to restrict external access to my website https://testxy.com/ (or allow it only for a specific IP range), but allow access from anywhere to the subfolder https://testxy.com/tempDownload/.
I already tried the following:
http-request deny if { path -i -m beg / } !{ src 10.10.20.0/24 }
How can I do that?
This solved my problem (in case anyone else has the same question):
http-request allow if { path_dir -i /tempDownload } { src 0.0.0.0/0 }
http-request allow if { path_dir -i /xy1 } { src 10.10.20.0/24 }
http-request allow if { path_dir -i /xy2 } { src 10.10.20.0/24 }
http-request deny if { path_dir -i -m beg / } !{ src 10.10.20.0/24 }
I would use an exact match instead of a beg path match. The -i flag is also useless here, as there is no lower- or upper-case version of / (see the ACL flags documentation):
http-request deny if { path / } !{ src 10.10.20.0/24 }

nginx redirection depending on host

I have two domains website1.com and website2.com linked to my server.
I'm trying to set up the following rewrite rules:
http://website1.com/ --> /website1/ (static)
http://website2.com/ --> /website2/ (static)
http://website1.com/app/ --> http://localhost:8080/web-app/web1/
http://website2.com/app/ --> http://localhost:8080/web-app/web2/
The user will be redirected to a static website served by nginx or an application server depending on the url.
Here's what I tried so far:
location / {
    root html;
    index index.html index.htm;
    if ($http_host = website1.com) {
        rewrite / /website1/index.html break;
        rewrite (.*) /website1/$1;
    }
    if ($http_host = website2.com) {
        #same logic
    }
}

location /app/ {
    proxy_pass http://localhost:8080/web-app/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    if ($http_host = website1.com) {
        rewrite /app/(.*) /$1 break;
        rewrite /app /index.html;
    }
    if ($http_host = website2.com) {
        #same logic
    }
}
The static part seems to work fine, but the web-app proxy part serves index.html no matter what file is requested.
This is not much of a definitive answer, but rather just my explanation of how I get nginx proxies to work.
root html;
index index.html index.htm;

server {
    listen 80;
    server_name website1.com;

    location / {
        alias html/website1/;
    }

    location /app/ {
        proxy_pass http://localhost:8080/web-app/web1/;
    }
}

server {
    listen 80;
    server_name website2.com;

    location / {
        alias html/website2/;
    }

    location /app/ {
        proxy_pass http://localhost:8080/web-app/web2/;
    }
}
The issue looks like it's being caused by these rewrites:
rewrite /app/(.*) /$1 break;
rewrite /app /index.html;
Using server blocks with server_name and the alias directive, we can do away with most of that logic. Let me know if there's anything that is still not clear.
I think you're doing it wrong. If there is so much difference between the hosts, it would be cleaner and more efficient to have two distinct configurations, one for each host.
On the other hand, if your intention is to have multiple almost-identical configurations, then the correct way to go about it might be map, and not if.
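For illustration, here is a minimal sketch of the map-based variant (the hostnames, port, and the $rest capture name are mine, not from the question). Note that when proxy_pass contains variables, nginx forwards exactly the URI you assemble:

map $host $app_root {
    default            /web-app/;
    host1.example.com  /web-app/web1/;
    host2.example.com  /web-app/web2/;
}

server {
    listen 80;
    # The named capture holds whatever follows /app/ (names here are illustrative).
    location ~ ^/app/(?<rest>.*)$ {
        proxy_set_header Host $host;
        proxy_pass http://localhost:8080$app_root$rest;
    }
}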
Back to your configuration: I've tried running it just to see how it works. One thing you may notice is that the path you specify within proxy_pass effectively becomes a noop once the $host-specific rewrite within the same context changes the $uri. This is by design, and it is clearly documented at http://nginx.org/r/proxy_pass ("When the URI is changed inside a proxied location using the rewrite directive").
So, in fact, the configuration below does appear to adhere to your spec. Testing it:
%curl -H "Host: host1.example.com" "localhost:4935/app/"
host1.example.com/web-app/web1/
%curl -H "Host: host2.example.com" "localhost:4935/app/"
host2.example.com/web-app/web2/
%curl -H "Host: example.com" "localhost:4935/app/"
example.com/web-app/
Here's the config I've used:
server {
    listen [::]:4935;
    default_type text/plain;

    location / {
        return 200 howdy;
    }

    location /app/ {
        proxy_set_header Host $host;
        proxy_pass http://localhost:4936/web-app/;  # path is a NOOP if $uri gets changed
        if ($host = host1.example.com) {
            rewrite /app/(.*) /web-app/web1/$1 break;
            rewrite /app /web-app/index.html;
        }
        if ($host = host2.example.com) {
            rewrite /app/(.*) /web-app/web2/$1 break;
            rewrite /app /web-app/index.html;
        }
    }
}
server {
    listen [::]:4936;
    return 200 $host$request_uri\n;
}

REST interface using nginx httpluamodule

I'm just getting started with nginx and the HttpLuaModule. I've created a test configuration file to familiarize myself with how things work.
Now I'm trying to write some logic to accept GET, POST, and DELETE requests for a specific type of resource.
I would like to create a "location" entry that would match the following URI / accept the following curl calls:
curl -i -X GET http://localhost/widgets/widget?name=testname&loc=20000 -H "Accept:application/json"
This is what my current nginx.conf looks like:
server {
    listen 80;
    server_name nsps2;
    root /var/www/;
    index index.html index.htm;

    #charset koi8-r;
    #access_log logs/host.access.log main;

    #curl http://localhost/hello?name=johndoe
    location /hello {
        default_type "text/plain";
        content_by_lua '
            local rquri = ngx.var.request_uri;
            ngx.say("the uri is ", rquri ,".")
            local name = ngx.var.arg_name or "Anonymous"
            ngx.say("Hello, ", name, "!")
        ';
    }

    location / {
        root /var/www/;
        index index.html index.htm;
    }

    #curl -i -X GET http://localhost/widgets/widget?name=testname&loc=20000 -H "Accept:application/json"
    location /widgets/widget {
        root /var/www/widgets;
        default_type "text/pain";
        content_by_lua '
            local arga, argb = ngx.arg[1], ngx.arg[2];
            ngx.say("the arga is ", arga ,".")
            ngx.say("the argb is ", argb, ".")
        ';
    }
}
Using the last "location" entry, I'm trying to
1. prove that the system is getting the GET request
2. prove that I understand how to access the parameters passed in with the GET request.
I'm getting an error right now that looks like this:
2015/02/24 20:18:19 [error] 2354#0: *1 lua entry thread aborted: runtime error: content_by_lua:2: API disabled in the context of content_by_lua*
stack traceback:
coroutine 0:
[C]: in function '__index'
content_by_lua:2: in function <content_by_lua:1>, client: 127.0.0.1, server: nsps2, request: "GET /widgets/widget?name=testname?loc=20000 HTTP/1.1", host: "localhost"
I'm not too sure about what this error means / is trying to tell me.
Any tips would be appreciated.
Thank you.
EDIT 1
I think the problem was a syntax error. I was reading the manual and found an example to try. I've changed the code to look like this:
location /widgets/widget {
    default_type "text/pain";
    content_by_lua '
        local args = ngx.req.get_uri_args()
        for key, val in pairs(args) do
            if type(val) == "table" then
                ngx.say(key, ": ", table.concat(val, ", "))
            else
                ngx.say(key, ": ", val)
            end
        end
    ';
}
Now when I call the app like this:
mytestdevbox2:/var/www/nsps2# curl -i -X GET http://localhost/widgets/widget?name=testname&loc=20000 -H "Accept:application/json"
-ash: -H: not found
mytestdevbox2:/var/www/nsps2# HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Tue, 24 Feb 2015 21:32:44 GMT
Content-Type: text/pain
Transfer-Encoding: chunked
Connection: keep-alive
name: testname
[1]+ Done curl -i -X GET http://localhost/widgets/widget?name=testname
After the system displays the "name: testname" output, it just sits there until I hit Enter; only then does it display the [1]+ Done line.
I'm not too sure what it's doing.
EDIT 2:
Adding quotes to the curl call did fix the problem:
mytestdevbox2:/var/www/nsps2# curl -i -X GET 'http://localhost/widgets/widget?name=testname&loc=20000'
HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Wed, 25 Feb 2015 12:59:29 GMT
Content-Type: text/pain
Transfer-Encoding: chunked
Connection: keep-alive
loc: 20000
name: testname
mytestdevbox2:/var/www/nsps2#
The problem described in EDIT 1 doesn't have anything to do with your question about nginx; it is caused by not quoting the parameters for curl when you execute the command:
curl -i -X GET http://localhost/widgets/widget?name=testname&loc=20000
This is executed as two commands separated by &: curl -i -X GET http://localhost/widgets/widget?name=testname and loc=20000. That's why you see the output after you already got the prompt back: the first command is now executed in the background. The "[1]+ Done" message is confirmation that the background process has terminated; it's just shown after you press Enter.
Wrap the URL with the query string in quotes and you should see the expected behavior.
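For example, either quoting or escaping works in a POSIX shell (the URL is the one from the question):

# an unquoted & backgrounds the command, so quote the whole URL...
curl -i 'http://localhost/widgets/widget?name=testname&loc=20000'
# ...or escape the ampersand
curl -i http://localhost/widgets/widget?name=testname\&loc=20000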

mIRC socket inconsistently not working

I'm currently working on a new uptime command for my Twitch bot and I'm having problems with sockets. I'm trying to use the page http://api.twitch.tv/api/channels/duilio1337 to check whether a streamer is online. It works fine sometimes, but other times it reads the page header and not the page. It seems to be random.
alias uptimecheck {
  echo -a CHECKING CHANNEL $1 STATUS
  %uptimeurl = /api/channels/ $+ $1
  %uptimecheckchan = $1
  sockopen wuptime api.twitch.tv 80
}

on *:sockopen:wuptime: {
  sockwrite -n $sockname GET %uptimeurl HTTP/1.0
  sockwrite -n $sockname User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
  sockwrite -n $sockname Host: api.twitch.tv
  sockwrite -n $sockname Accept-Language: en-us
  sockwrite -n $sockname Accept: */*
  sockwrite -n $sockname
}

on *:sockread:wuptime: {
  if ($sockerr) {
    echo -a UPTIME Connection Error.
    halt
  }
  else {
    sockread -f %uptimeliverail
    if ($sockbr == 0) return
    ;echo -a %uptimeliverail
  }
}

on *:sockclose:wuptime: {
  var %TEMPliverail $mid(%uptimeliverail, $calc($pos(%uptimeliverail, "twitch_liverail_id") + 21), 4)
  if (%TEMPliverail == null) {
    set %uptimelive. $+ %uptimecheckchan 0
  }
  else if (%TEMPliverail isnum) {
    set %uptimelive. $+ %uptimecheckchan 1
  }
}
The content of http://api.twitch.tv/api/channels/duilio1337 is just:
{"display_name":"duilio1337","game":null,"status":"Test","fight_ad_block":false,"_id":47890122,"name":"duilio1337","partner":false,"comscore_id":null,"comscore_c6":null,"twitch_liverail_id":null,"hide_ads_for_subscribers":false,"liverail_id":null,"ppv":false,"video_banner":null,"steam_id":null,"broadcaster_software":"unknown_rtmp","prerolls":true,"postrolls":true,"product_path":""}
It is not an HTML page, so what do you mean by "it reads the page header but not the page"? Maybe you can explain your problem in more detail.
Anyway... I think for your purposes it should be enough to send the GET and Host headers in your sockopen event:
on *:sockopen:wuptime: {
  sockwrite -nt $sockname GET %uptimeurl HTTP/1.1
  sockwrite -nt $sockname Host: api.twitch.tv
  sockwrite $sockname $crlf
}
In HTTP/1.0 the Host header is not needed, but it doesn't hurt; in HTTP/1.1 it is required.
The -t switch is recommended so that & is not treated as a binary variable.
The sockwrite $sockname $crlf is a more explicit way to finish the HTTP request, and is equivalent (IIRC) to your sockwrite -n $sockname (because -n appends a $crlf).
Also, your sockread event in general should look something like:
on *:sockread:wuptime: {
  if ($sockerr) {
    ;error msg code here
    return
  }
  var %data
  sockread %data
  ;your code here
  unset %data
}
Avoid halt; that command immediately stops any further processing. In most cases it is more appropriate to use return.
Avoid unnecessary if-else conditionals.
It is also important to note that the sockread event fires each time the server sends a line.
The server is most likely returning
Vary: Accept-Encoding
in the response header. Try replacing your Accept: */* line with
sockwrite -n $sockname Accept: gzip
and see what happens. It will most likely return compressed binary data (rather than timing out or returning nothing at all). The problem is that mIRC can't decompress this; it doesn't support gzip decoding. I ran into this exact same problem last night, and here is how I solved it, for anyone working with mIRC sockets who needs to fetch gzip'd content.
First, make sure you change the header as I have above, to Accept: gzip.
Second, grab a copy of 7za (7-Zip for the Windows console) from http://www.7-zip.org/a/7za920.zip. Extract 7za and put it in the c:\program files (x86)\mIRC\ folder, or wherever your mirc.exe resides.
I've modified the OP's on *:sockread: block to fetch and extract the gzip'd content below:
on *:sockread:wuptime: {
  var %bare $rand(423539,999999)
  var %gz 1
  :play
  var %dg
  sockread %dg
  :wile
  if ( %gz = 2 ) {
    sockread -f &gzd
  }
  if ($sockerr) {
    return
  }
  if ( %gz = 2 ) {
    while ($sockbr) {
      bwrite %bare -1 -1 &gzd
      sockread -f &gzd
    }
  }
  if ($sockbr == 0) {
    sockclose $sockname
    echo -at closed
    run -h 7za.exe e %bare
    .timer 1 2 wuptimeparse %bare
    return
  }
  if ( %dg = $null) {
    var %gz 2
    goto wile
  }
  goto play
}
This is probably pretty crude, but it starts bwrite-ing to a randomly named file once the headers have finished arriving, and once $sockbr is empty it runs 7-Zip on the compressed binary data. The alias below is then called a couple of seconds later (time for 7-Zip to do its magic; you can probably do this in a far more elegant way) to spit out the plaintext JSON you're looking for. Be warned: if the JSON in your example is huge (4000+ characters), you will need to read it as a binvar with bread, because the string will be too large to fit into a variable.
alias wuptimeparse {
  var %file $1
  var %bare $read(%file $+ ~,1)
  echo %bare
  ;do something with %bare
  .timer 1 1 remove %file
  .timer 1 1 remove %file $+ ~
}

How to use cURL to send Cookies?

I read that sending cookies with cURL works, but it doesn't for me.
I have a REST endpoint like this:
class LoginResource(restful.Resource):
    def get(self):
        print(session)
        if 'USER_TOKEN' in session:
            return 'OK'
        return 'not authorized', 401
When I try to access the endpoint, it refuses:
curl -v -b ~/Downloads/cookies.txt -c ~/Downloads/cookies.txt http://127.0.0.1:5000/
* About to connect() to 127.0.0.1 port 5000 (#0)
* Trying 127.0.0.1...
* connected
* Connected to 127.0.0.1 (127.0.0.1) port 5000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.27.0
> Host: 127.0.0.1:5000
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 401 UNAUTHORIZED
< Content-Type: application/json
< Content-Length: 16
< Server: Werkzeug/0.8.3 Python/2.7.2
< Date: Sun, 14 Apr 2013 04:45:45 GMT
<
* Closing connection #0
"not authorized"%
Where my ~/Downloads/cookies.txt is:
cat ~/Downloads/cookies.txt
USER_TOKEN=in
and the server receives nothing:
127.0.0.1 - - [13/Apr/2013 21:43:52] "GET / HTTP/1.1" 401 -
127.0.0.1 - - [13/Apr/2013 21:45:30] "GET / HTTP/1.1" 401 -
<SecureCookieSession {}>
<SecureCookieSession {}>
127.0.0.1 - - [13/Apr/2013 21:45:45] "GET / HTTP/1.1" 401 -
What is it that I am missing?
This worked for me:
curl -v --cookie "USER_TOKEN=Yes" http://127.0.0.1:5000/
I could see the value in the backend using
print(request.cookies)
You can refer to https://curl.haxx.se/docs/http-cookies.html for a complete tutorial on how to work with cookies. You can use
curl -c /path/to/cookiefile http://yourhost/
to write cookies to a file and start the cookie engine, and
curl -b /path/to/cookiefile http://yourhost/
to read cookies from that file and start the cookie engine; if the argument isn't a file, it is passed on as a cookie string.
You are using the wrong format in your cookie file. As the curl documentation states, curl uses the old Netscape cookie file format, which is different from the format used by web browsers. If you need to create a curl cookie file manually, this post should help you. In your example, the file should contain the following line
127.0.0.1	FALSE	/	FALSE	0	USER_TOKEN	in
with 7 TAB-separated fields meaning domain, tailmatch, path, secure, expires, name, value.
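If you'd rather not type the TABs by hand, a printf one-liner can generate that exact line (the path and cookie values are the ones from the question; the command itself is just a suggestion):

# \t produces the TAB separators the Netscape format requires
printf '127.0.0.1\tFALSE\t/\tFALSE\t0\tUSER_TOKEN\tin\n' > ~/Downloads/cookies.txt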
Since curl 7.55, headers can be read from a file with @<file>:
curl -H @<header_file> <host>
For example:
echo 'Cookie: USER_TOKEN=Yes' > /tmp/cookie
curl -H @/tmp/cookie <host>
docs & commit
If you have made that request in your application already, and see it logged in Google Dev Tools, you can use the copy cURL command from the context menu when right-clicking on the request in the network tab. Copy -> Copy as cURL.
It will contain all headers, cookies, etc..
I'm using Debian, and I was unable to use tilde for the path. Originally I was using
curl -c "~/cookie" http://localhost:5000/login -d username=myname password=mypassword
I had to change this to:
curl -c "/tmp/cookie" http://localhost:5000/login -d username=myname password=mypassword
-c creates the cookie, -b uses the cookie
so then I'd use for instance:
curl -b "/tmp/cookie" http://localhost:5000/getData
Another solution, using JSON.
CURL:
curl -c /tmp/cookie -X POST -d '{"chave":"email","valor":"hvescovi@hotmail.com"}' -H "Content-Type:application/json" localhost:5000/set
curl -b "/tmp/cookie" -d '{"chave":"email"}' -X GET -H "Content-Type:application/json" localhost:5000/get
curl -b "/tmp/cookie" -d '{"chave":"email"}' -X GET -H "Content-Type:application/json" localhost:5000/delete
PYTHON CODE:
from flask import Flask, request, session, jsonify
from flask_session import Session

app = Flask(__name__)
app.secret_key = '$#EWFGHJUI*&DEGBHYJU&Y%T#RYJHG%##RU&U'
app.config["SESSION_PERMANENT"] = False
app.config["SESSION_TYPE"] = "filesystem"
Session(app)

@app.route('/')
def padrao():
    return 'backend server-side.'

@app.route('/set', methods=['POST'])
def set():
    resposta = jsonify({"resultado": "ok", "detalhes": "ok"})
    dados = request.get_json()
    try:
        if 'chave' not in dados:  # is the 'chave' attribute missing?
            resposta = jsonify({"resultado": "erro",
                                "detalhes": "Atributo chave não encontrado"})
        else:
            session[dados['chave']] = dados['valor']
    except Exception as e:  # in case of error...
        resposta = jsonify({"resultado": "erro", "detalhes": str(e)})
    resposta.headers.add("Access-Control-Allow-Origin", "*")
    return resposta

@app.route('/get')
def get():
    try:
        dados = request.get_json()
        retorno = {'resultado': 'ok'}
        retorno.update({'detalhes': session[dados['chave']]})
        resposta = jsonify(retorno)
    except Exception as e:
        resposta = jsonify({"resultado": "erro", "detalhes": str(e)})
    resposta.headers.add("Access-Control-Allow-Origin", "*")
    return resposta

@app.route('/delete')
def delete():
    try:
        dados = request.get_json()
        session.pop(dados['chave'], None)
        resposta = jsonify({"resultado": "ok", "detalhes": "ok"})
    except Exception as e:  # in case of error...
        resposta = jsonify({"resultado": "erro", "detalhes": str(e)})
    resposta.headers.add("Access-Control-Allow-Origin", "*")
    return resposta

app.run(debug=True)
Here is an example of the correct way to send cookies: -H 'cookie: key1=val1; key2=val2;'
cURL also offers --cookie as a convenience. Run man curl or tldr curl.
This was copied from Chrome > Inspect > Network > Copy as cURL.
curl 'https://www.example.com/api/app/job-status/' \
-H 'authority: www.example.com' \
-H 'sec-ch-ua: "Chromium";v="92", " Not A;Brand";v="99", "Google Chrome";v="92"' \
-H 'sec-ch-ua-mobile: ?0' \
-H 'user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.111.111 Safari/111.36' \
-H 'content-type: application/json' \
-H 'accept: */*' \
-H 'origin: https://www.example.com' \
-H 'sec-fetch-site: same-origin' \
-H 'sec-fetch-mode: cors' \
-H 'sec-fetch-dest: empty' \
-H 'referer: https://www.example.com/app/jobs/11111111/' \
-H 'accept-language: en-US,en;q=0.9' \
-H 'cookie: menuOpen_v3=true; imageSize=medium;' \
--data-raw '{"jobIds":["1111111111111"]}' \
--compressed
I am using Git Bash on Windows and nothing I found worked for me.
So I settled on saving my cookie to a file named .session and using cat to read from it, like so:
curl -b $(cat .session) http://httpbin.org/cookies
And if you are curious my cookie looks like this:
session=abc123