HAProxy Lua can't add action on tcp-request connection

I have a Lua script for HAProxy that checks the client IP against a whitelist, and I need to add it to the HAProxy config.
I need to do this at the TCP connection level, and according to the HAProxy blog post I should be able to, since these actions are listed:
tcp-request connection <action>
tcp-request content <action>
tcp-response content <action>
http-request <action>
http-response <action>
But if I use tcp-request connection lua.checkip, HAProxy fails to start with this error message:
haproxy[9384]: [ALERT] 124/000121 (9384) : parsing [/etc/haproxy/haproxy.cfg:42] : 'tcp-request connection' expects 'accept', 'reject', 'track-sc0' ... 'track-sc2', 'sc-inc-gpc0(*)', 'sc-inc-gpc1(*)', 'sc-set-gpt0(*)', 'set-src', 'set-src-port', 'set-dst', 'set-dst-port', 'silent-drop' in frontend 'haproxy_rserve' (got 'lua.checkip').
But tcp-request content lua.checkip works.
As far as I understand, these two rules must differ in some way. The connection level seems like the best fit for my case, or can I safely use content instead?
I'm building a heavily loaded system, so I don't want to get this wrong at the configuration stage.
These are the lines I'm trying to add to the frontend:
tcp-request inspect-delay 5s
tcp-request connection lua.checkip
tcp-request connection reject if { var(req.blocked) -m bool }
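For reference, a minimal sketch of what the Lua side could look like; the actual script isn't shown in the question, so the whitelist entries and the use of req.blocked are assumptions based on the config above:
-- checkip.lua (sketch): flag non-whitelisted clients via a transaction variable
local whitelist = {
    ["127.0.0.1"] = true,       -- example entries (assumed)
    ["192.168.0.10"] = true,
}

core.register_action("checkip", { "tcp-req" }, function(txn)
    local src = txn.f:src()                   -- client source address
    if not whitelist[src] then
        txn:set_var("req.blocked", true)      -- read by the reject rule in the frontend
    end
end)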

A look at the source reveals that registering a Lua tcp-req action is indeed just for content, and there's no way to register a Lua action for a connection:
    if (strcmp(lua_tostring(L, -1), "tcp-req") == 0)
        tcp_req_cont_keywords_register(akl);
    else if (strcmp(lua_tostring(L, -1), "tcp-res") == 0)
        tcp_res_cont_keywords_register(akl);
    else if (strcmp(lua_tostring(L, -1), "http-req") == 0)
        http_req_keywords_register(akl);
    else if (strcmp(lua_tostring(L, -1), "http-res") == 0)
        http_res_keywords_register(akl);
    else
        WILL_LJMP(luaL_error(L, "Lua action environment '%s' is unknown. "
                                "'tcp-req', 'tcp-res', 'http-req' or 'http-res' "
                                "are expected.", lua_tostring(L, -1)));
The function that it'd need to call is tcp_req_conn_keywords_register, but unfortunately that's not exposed to Lua. The only callers of that function are for some hardcoded actions in proto_tcp.c and stick_table.c:
static struct action_kw_list tcp_req_conn_actions = {ILH, {
{ "set-src", tcp_parse_set_src_dst },
{ "set-src-port", tcp_parse_set_src_dst },
{ "set-dst" , tcp_parse_set_src_dst },
{ "set-dst-port", tcp_parse_set_src_dst },
{ "silent-drop", tcp_parse_silent_drop },
{ /* END */ }
}};
INITCALL1(STG_REGISTER, tcp_req_conn_keywords_register, &tcp_req_conn_actions);
static struct action_kw_list tcp_conn_kws = { { }, {
{ "sc-inc-gpc0", parse_inc_gpc0, 1 },
{ "sc-inc-gpc1", parse_inc_gpc1, 1 },
{ "sc-set-gpt0", parse_set_gpt0, 1 },
{ /* END */ }
}};
INITCALL1(STG_REGISTER, tcp_req_conn_keywords_register, &tcp_conn_kws);
However, there are two pieces of good news:
If you can put your whitelist in a file, you can use tcp-request connection reject unless { src -f /etc/haproxy/whitelist.lst } rather than needing Lua at all (see the sketch after these two points).
Nothing jumps out at me as a critical reason this couldn't be supported from Lua, so it may be possible to add it in a future release.
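As a sketch of the first option, assuming /etc/haproxy/whitelist.lst holds one address or CIDR per line, the frontend could be reduced to something like:
frontend haproxy_rserve
    # bind ... (as in your existing config)
    tcp-request connection reject unless { src -f /etc/haproxy/whitelist.lst }
This runs at the connection level, so rejected clients are dropped before any content inspection or Lua involvement.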

Related

Prevent nginx from killing idle tcp sockets

I'm trying to use nginx as a reverse proxy for SSL/TCP sockets (so that I can write my server as raw TCP, but have nginx handle the SSL certificates). My use case requires that the TCP connections remain alive but go idle (no packets back and forth) for extended periods of time (determined by the client, but as long as an hour). Unfortunately, nginx kills my socket connections after the first 10 minutes (timed to within a second) of inactivity, and I haven't been able to find, either online or in the docs, what actually controls this timeout.
I know it has to be nginx doing it (not my raw server timing out, or my client's SSL socket), since I can connect directly to the server's raw TCP port without timeout issues, but if I go through nginx as a raw TCP reverse proxy (no SSL) it does time out.
Here's some code to reproduce the issue. Note that I've commented out the SSL-relevant pieces in nginx because the timeout occurs either way.
/etc/nginx/modules-enabled/test.conf:
stream {
    upstream tcp-server {
        server localhost:33445;
    }
    server {
        listen 33446;
        # listen 33446 ssl;
        proxy_pass tcp-server;

        # Certs
        # ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        # ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    }
}
server.js:
const net = require("net");
const s = net.createServer();
s.on("connection", (sock) => {
console.log('Got connection from', sock.remoteAddress, sock.remotePort );
sock.on("error", (err) => {
console.error(err)
clearInterval(i);
});
sock.on("close", () => {
console.log('lost connection from', sock.remoteAddress, sock.remotePort );
clearInterval(i);
});
});
s.listen(33445);
client.js:
const net = require('net');
const host = 'example.com';
let use_tls = false;
let client;
let start = Date.now();
// Use me to circumvent nginx, and no timeout occurs
// let port = 33445;
// Use me to go through nginx, and the socket is killed after 10 mins of no RX/TX
let port = 33446;
client = new net.Socket();
client.connect({ port, host }, function() {
    console.log('Connected via TCP');
    // Include me, and nginx doesn't kill the socket
    // setInterval(() => { client.write("ping") }, 5000);
});
client.on('end', function() {
    console.log('Disconnected: ' + ((Date.now() - start) / 1000 / 60) + " mins");
});
I've tried various directives in the nginx stream block, but nothing seems to help. Thanks in advance!
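For what it's worth, the 10-minute cutoff matches the default of the stream module's proxy_timeout directive, which closes the session when no data is transferred in either direction within that time. A minimal sketch of raising it (the value here is arbitrary):
stream {
    upstream tcp-server {
        server localhost:33445;
    }
    server {
        listen 33446;
        proxy_pass tcp-server;
        proxy_timeout 2h;   # default is 10m; session closes after this much idle time
    }
}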

Varnish MISS on some URLs, but HITS on others

We are using Magento 2 with Varnish cache.
We only get a Varnish cache HIT on very few /catalogsearch/result/ pages, and we really cannot figure out why we don't get a cache HIT on all /catalogsearch/result/ pages.
Please help us in the right direction :-)
Ex.
We always get a HIT on this URL:
https://www.babygear.dk/catalogsearch/result/?q=bog
We always get a MISS on a lot of other search queries:
https://www.babygear.dk/catalogsearch/result/?q=black
https://www.babygear.dk/catalogsearch/result/?q=box
Here is our varnish.vcl:
# VCL version 5.0 is not supported so it should be 4.0 even though actually used Varnish version is 5
vcl 4.0;
import std;
# The minimal Varnish version is 5.0
# For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'
backend default {
.host = "127.0.0.1";
.port = "8080";
.first_byte_timeout = 600s;
}
acl purge {
"127.0.0.1";
}
sub vcl_recv {
if (req.method == "PURGE") {
if (client.ip !~ purge) {
return (synth(405, "Method not allowed"));
}
# To use the X-Pool header for purging varnish during automated deployments, make sure the X-Pool header
# has been added to the response in your backend server config. This is used, for example, by the
# capistrano-magento2 gem for purging old content from varnish during it's deploy routine.
if (!req.http.X-Magento-Tags-Pattern && !req.http.X-Pool) {
return (synth(400, "X-Magento-Tags-Pattern or X-Pool header required"));
}
if (req.http.X-Magento-Tags-Pattern) {
ban("obj.http.X-Magento-Tags ~ " + req.http.X-Magento-Tags-Pattern);
}
if (req.http.X-Pool) {
ban("obj.http.X-Pool ~ " + req.http.X-Pool);
}
return (synth(200, "Purged"));
}
if (req.method != "GET" &&
req.method != "HEAD" &&
req.method != "PUT" &&
req.method != "POST" &&
req.method != "TRACE" &&
req.method != "OPTIONS" &&
req.method != "DELETE") {
/* Non-RFC2616 or CONNECT which is weird. */
return (pipe);
}
# We only deal with GET and HEAD by default
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Bypass shopping cart, checkout
if (req.url ~ "/checkout") {
return (pass);
}
# Bypass health check requests
if (req.url ~ "/pub/health_check.php") {
return (pass);
}
# Set initial grace period usage status
set req.http.grace = "none";
# normalize url in case of leading HTTP scheme and domain
set req.url = regsub(req.url, "^http[s]?://", "");
# collect all cookies
std.collect(req.http.Cookie);
# Compression filter. See https://www.varnish-cache.org/trac/wiki/FAQ/Compression
if (req.http.Accept-Encoding) {
if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$") {
# No point in compressing these
unset req.http.Accept-Encoding;
} elsif (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";
} elsif (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") {
set req.http.Accept-Encoding = "deflate";
} else {
# unknown algorithm
unset req.http.Accept-Encoding;
}
}
# Remove all marketing get parameters to minimize the cache objects
if (req.url ~ "(\?|&)(gclid|ff|fp|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=") {
set req.url = regsuball(req.url, "(gclid|ff|fp|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=[-_A-z0-9+()%.]+&?", "");
set req.url = regsub(req.url, "[?|&]+$", "");
}
# Static files caching
if (req.url ~ "^/(pub/)?(media|static)/") {
# Static files should not be cached by default
#return (pass);
# But if you use a few locales and don't use CDN you can enable caching static files by commenting previous line (#return (pass);) and uncommenting next 3 lines
unset req.http.Https;
unset req.http.X-Forwarded-Proto;
unset req.http.Cookie;
}
return (hash);
}
sub vcl_hash {
if (req.http.cookie ~ "X-Magento-Vary=") {
hash_data(regsub(req.http.cookie, "^.*?X-Magento-Vary=([^;]+);*.*$", "\1"));
}
# For multi site configurations to not cache each other's content
if (req.http.host) {
hash_data(req.http.host);
} else {
hash_data(server.ip);
}
# To make sure http users don't see ssl warning
if (req.http.X-Forwarded-Proto) {
hash_data(req.http.X-Forwarded-Proto);
}
if (req.url ~ "/graphql") {
call process_graphql_headers;
}
}
sub process_graphql_headers {
if (req.http.Store) {
hash_data(req.http.Store);
}
if (req.http.Content-Currency) {
hash_data(req.http.Content-Currency);
}
}
sub vcl_backend_response {
set beresp.grace = 3d;
if (beresp.http.content-type ~ "text") {
set beresp.do_esi = true;
}
if (bereq.url ~ "\.js$" || beresp.http.content-type ~ "text") {
set beresp.do_gzip = true;
}
if (beresp.http.X-Magento-Debug) {
set beresp.http.X-Magento-Cache-Control = beresp.http.Cache-Control;
}
# cache only successful responses
if (beresp.status != 200) {
set beresp.ttl = 0s;
set beresp.uncacheable = true;
return (deliver);
} elsif (beresp.http.Cache-Control ~ "private") {
set beresp.uncacheable = true;
set beresp.ttl = 44400s;
return (deliver);
}
# validate if we need to cache it and prevent from setting cookie
if (beresp.ttl > 0s && (bereq.method == "GET" || bereq.method == "HEAD")) {
unset beresp.http.set-cookie;
}
# If page is not cacheable then bypass varnish for 2 minutes as Hit-For-Pass
if (beresp.ttl <= 0s ||
beresp.http.Surrogate-control ~ "no-store" ||
(!beresp.http.Surrogate-Control &&
beresp.http.Cache-Control ~ "no-cache|no-store") ||
beresp.http.Vary == "*") {
# Mark as Hit-For-Pass for the next 2 minutes
set beresp.ttl = 120s;
set beresp.uncacheable = true;
}
return (deliver);
}
sub vcl_deliver {
if (resp.http.X-Magento-Debug) {
if (resp.http.x-varnish ~ " ") {
set resp.http.X-Magento-Cache-Debug = "HIT";
set resp.http.Grace = req.http.grace;
} else {
set resp.http.X-Magento-Cache-Debug = "MISS";
}
} #else {
# unset resp.http.Age;
# }
# Not letting browser to cache non-static files.
if (resp.http.Cache-Control !~ "private" && req.url !~ "^/(pub/)?(media|static)/") {
set resp.http.Pragma = "no-cache";
set resp.http.Expires = "-1";
set resp.http.Cache-Control = "no-store, no-cache, must-revalidate, max-age=0";
}
unset resp.http.X-Magento-Debug;
unset resp.http.X-Magento-Tags;
unset resp.http.X-Powered-By;
unset resp.http.Server;
unset resp.http.X-Varnish;
unset resp.http.Via;
unset resp.http.Link;
}
sub vcl_hit {
if (obj.ttl >= 0s) {
# Hit within TTL period
return (deliver);
}
if (std.healthy(req.backend_hint)) {
if (obj.ttl + 3000000s > 0s) {
# Hit after TTL expiration, but within grace period
set req.http.grace = "normal (healthy server)";
return (deliver);
} else {
# Hit after TTL and grace expiration
return (miss);
}
} else {
# server is not healthy, retrieve from cache
set req.http.grace = "unlimited (unhealthy server)";
return (deliver);
}
}
It's a bit tough to judge what's really going on, because there's both a Varnish cache in front of Magento and Cloudflare as the CDN.
A no-cache/no-store Cache-Control value
What I am seeing in general for your searches is the following Cache-Control value:
cache-control: no-store, no-cache, must-revalidate, max-age=0
Based on this value, Varnish will decide not to cache. In the response headers of most of your search results you will see that Age: 0 is set, which means Varnish doesn't hold the page in cache.
A cached search result that shouldn't be cacheable
However, weirdly enough, https://www.babygear.dk/catalogsearch/result/?q=bog does have an Age header with a value greater than zero:
age: 38948
This means it has been in cache for 38948 seconds. But really, this shouldn't be happening, because the page is not supposed to be cacheable.
Making search results cacheable
If you want to cache search results, make sure the backend sends a Cache-Control header that allows caching.
Example:
Cache-Control: public, s-maxage=3600
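(s-maxage is honoured by shared caches such as Varnish, while browsers fall back to max-age, so you can let Varnish cache search results without also forcing long browser caching.)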
Debugging using Varnishlog
If you really want to know what happens behind the scenes in Varnish, you can do some debugging using the varnishlog binary.
You could run the following command to get debug output:
varnishlog -g request -q "ReqUrl eq '/catalogsearch/result/\?q=bog'"
This will print some very verbose logs showing how Varnish treats the URL that gets the hit. You can add this output to your question, and I can try to examine what's going on.
FYI: I wrote a very detailed blog post about varnishlog a couple of years ago. See https://feryn.eu/blog/varnishlog-measure-varnish-cache-performance/ for more detail.
What's Cloudflare doing?
All that being said, I have no clue what impact Cloudflare has on the cacheability of the website. The varnishlog output will give us some insight, but if those results diverge from what you see in the browser, Cloudflare is probably getting in the way.
Keep this in mind while debugging.
Inside vcl_recv, add:
# Bypass search requests
if (req.url ~ "/catalogsearch") {
return (pass);
}
Also check your .htaccess file for Cache-Control settings (see the example below).
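As a hedged example (assuming an Apache backend with mod_headers enabled, and that you scope it to the pages you actually want cached), a Cache-Control override could look like:
<IfModule mod_headers.c>
    # Example only: lets shared caches such as Varnish keep the response for an hour
    Header set Cache-Control "public, s-maxage=3600"
</IfModule>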

How to redirect requests to another host using ZAP?

I'm new to ZAP and I don't know much about its JS/ECMAScript scripting.
Basically, I'm trying to redirect a request to another host.
Say an application that is connected to the ZAP proxy makes a request to a URL:
http://www.somesite.com/path/to/a/file
but I want to change the hostname in the URL to:
www.anothersite.com
so it will actually request http://www.anothersite.com/path/to/a/file
Here's the code I was trying, but the URL remains unchanged in the request.
function proxyRequest(msg) {
    // Debugging can be done using println like this
    var uri = msg.getRequestHeader().getURI().toString()
    var host = msg.getRequestHeader().getURI().getHost().toString()
    print('proxyRequest called for url=' + uri)
    if (host == 'download.qt.io') {
        uri = uri.replace('download.qt.io/online/', 'mirrors.ocf.berkeley.edu/qt/online/')
        msg.getRequestHeader().setHeader('Location', uri)
        print('proxyRequest changed to url=' + uri)
    }
    if (host == 'ftp.jaist.ac.jp') {
        uri = uri.replace('ftp.jaist.ac.jp/pub/qtproject/online/', 'mirrors.ocf.berkeley.edu/qt/online/')
        msg.getRequestHeader().setHeader('Location', uri)
        print('proxyRequest changed to url=' + uri)
    }
    if (host == 'qtproject.mirror.liquidtelecom.com') {
        uri = uri.replace('qtproject.mirror.liquidtelecom.com/online/', 'mirrors.ocf.berkeley.edu/qt/online/')
        msg.getRequestHeader().setHeader('Location', uri)
        print('proxyRequest changed to url=' + uri)
    }
    return true
}
Option 1: Replacer Rule
Install the Replacer add-on from the marketplace:
Go to the Tools menu and select 'Replacer Options'.
Set up a rule that matches the original host (www.somesite.com) and replaces it with the new one (www.anothersite.com).
Save/OK as appropriate.
Now when you browse, etc., all your traffic will be redirected/rewritten.
Option 2: HttpSender Script
Create a new HttpSender script, similar to the following example:
function sendingRequest(msg, initiator, helper) {
    var host = msg.getRequestHeader().getURI().getHost();
    if (host.equals("www.somesite.com")) {
        var uri = msg.getRequestHeader().getURI();
        uri.setEscapedAuthority("www.anothersite.com");
        msg.getRequestHeader().setURI(uri);
    }
    return msg;
}
function responseReceived(msg, initiator, helper) {}
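Once enabled, HttpSender scripts apply to every request ZAP sends (proxied browser traffic, the spider, the active scanner, and so on), so the rewrite happens transparently for all of them.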
Option 3: Hosts File Entry
Go to a command prompt, nslookup www.anothersite.com, and note the IP address (w.x.y.z).
In your hosts file, add an entry associating www.somesite.com with the noted IP (w.x.y.z).
(You may need to restart ZAP/browsers for this change to take effect. On Linux you'll likely need sudo to edit the file; on Windows you'll need to edit it as an admin user.)
(Further details WRT editing your hosts file: https://www.howtogeek.com/howto/27350/beginner-geek-how-to-edit-your-hosts-file/)
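As a sketch, the resulting hosts file entry would look something like this (w.x.y.z being the IP noted above):
# Send requests for www.somesite.com to the server behind www.anothersite.com
w.x.y.z    www.somesite.com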

Google Cloud SQL or node-mysql takes a long time to respond

We have a project using Polymer as the front end and Node.js as the API consumed by Polymer, and our Node API takes a really long time to reply, especially if you leave the page alone for about 10 minutes. Upon further investigation, by inserting a date calculation around the MySQL query, I found out that MySQL itself responds after a really long time. The query code looks like this:
var query = dataStruct['formed_query'];
console.log(query);
var now = Date.now();
console.log("Getting Data for Foobar Query============ " + Date());
console.log(query);
GLOBAL.db_foobar.getConnection(function(err1, connection) {
    ////console.log("requesting MySQL connection");
    if (err1 == null) {
        connection.query(query, function(err, rows, fields) {
            console.log("response from MySQL Foobar Query============= " + Date());
            console.log("MySQL response Foobar Query=========> " + (Date.now() - now) + " ms");
            if (err == null) {
                // respond.respondJSON is just a res.json(msg); but I've added a similar calculation
                // for response time starting from express router.route until res.json occurs
                respond.respondJSON(dataJSON['resVal'], res, req);
            } else {
                var msg = {
                    "status": "Error",
                    "desc": "[Foobar Query]Error Executing Query",
                    "err": err,
                    "db_name": "common",
                    "query": query
                };
                respond.respondError(msg, res, req);
            }
            connection.release();
        });
    } else {
        var msg = {
            "status": "Error",
            "desc": "[Foobar Query]Error Getting Connection",
            "err": err1,
            "db_name": "common",
            "query": query
        };
        respond.respondJSON(msg, res, req);
        respond.emailError(msg);
        try {
            connection.release();
        } catch (err_release) {
            respond.LogInConsole(err_release);
            respond.LogInConsole(err_release.stack);
        }
    }
});
}
When Chrome Developer Tools reports a long pending time for the API call, this is what shows up in my log:
SELECT * FROM `foobar_table` LIMIT 0,20;
MySQL response Foobar Query=========> 10006 ms
I'm dumbfounded as to why this is happening.
We have our system hosted on Google Cloud. Our MySQL is a Google Cloud SQL instance with an activation policy of ALWAYS. We've also set up our Node server, which runs on a Google Compute Engine instance, to keep TCP4 connections alive via:
echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
sudo /sbin/sysctl --load=/etc/sysctl.conf
I'm using a mysql pool from node-mysql:
db_init.database = 'foobar_dbname';
db_init=ssl_set(db_init);
//GLOBAL.db_foobar = mysql.createConnection(db_init);
GLOBAL.db_foobar = mysql.createPool(db_init);
GLOBAL.db_foobar.on('connection', function (connection) {
setTimeout(tryForceRelease, mysqlForceTimeOut,connection);
});
db_init looks like this:
db_init = {
host : 'ip_address_of_GCS_SQL',
user : 'user_name_of_GCS_SQL',
password : '',
database : '',
supportBigNumbers: true,
connectionLimit:100
};
I'm also forcing connections to be released if they haven't been released within 2 minutes, just to make sure:
function tryForceRelease(connection)
{
try{
//console.log("force releasing connection");
connection.release();
}catch(err){
//do nothing
//console.log("connection already released");
}
}
This is really wracking my brains out here. If anyone can help please do.
I'll post the same answer here as I posted in node-mysql pool experiences ETIMEDOUT.
The questions are sufficiently different that I'm not sure it's worth duping them.
I suspect the reason is that keepalive is not enabled on the connection to the MySQL server.
node-mysql does not have an option to enable keepalive and neither does node-mysql2, but node-mysql2 provides a way to supply a custom function for creating sockets which we can use to enable keepalive:
var mysql = require('mysql2');
var net = require('net');

var pool = mysql.createPool({
    connectionLimit : 100,
    host            : '123.123.123.123',
    user            : 'foo',
    password        : 'bar',
    database        : 'baz',
    // Supply our own socket so we can turn on TCP keepalive
    stream          : function(opts) {
        var socket = net.connect(opts.config.port, opts.config.host);
        socket.setKeepAlive(true);
        return socket;
    }
});
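If you want the keepalive probes to start sooner than the OS default, setKeepAlive also accepts an initial delay in milliseconds, e.g.:
socket.setKeepAlive(true, 60 * 1000); // start probing after 60 seconds of idle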

Lighttpd SSL Redirect - Windows

I have a home server that I want to serve pages via HTTPS only, but I have run into some issues. I have been serving non-secure pages OK and could access them both on the local network and from the web (I'm using ddns.net and have all the port forwarding covered).
I have test certificates properly installed, and at the moment the redirects work fine on the local network but NOT from the web. Below are the two redirects I have tested; both work locally, but both fail to serve secure pages from the web.
NOTE: I use a non-standard port, i.e. port 1080. However, as mentioned above, non-secure access is all OK, so the port forwarding from my gateway router to the server is (at least I think!) fine. Also, I can only browse to the server when I append the port number to the IP/name, i.e. localhost:1080 or 192.168.1.1:1080 (which is fine by me), hence the redirects filter on the port.
In this instance, I can access the pages both securely and insecurely from the local network but can NOT access them securely from the web.
$HTTP["scheme"] == "http" {
$HTTP["host"] =~ "^(.*):1080" {
url.redirect = (".*" => "https://%1$0")
}
}
$SERVER["socket"] == ":443" {
ssl.engine = "enable"
ssl.pemfile = Var.Doo + "/server.pem"
ssl.ca-file = Var.Doo + "/ca.pem"
setenv.add-environment = ( "HTTPS" => "on" )
}
After some web research, I added a condition to the redirects to handle the URL without the port appended; however, now I can access the pages neither securely nor insecurely from the web (locally it still works though).
$HTTP["scheme"] == "http" {
$HTTP["host"] =~ "^(.*):1080" {
url.redirect = (".*" => "https://%1$0")
}
else $HTTP["host"] =~ ".*" {
url.redirect = (".*" => "https://%0$0")
}
}
$SERVER["socket"] == ":443" {
ssl.engine = "enable"
ssl.pemfile = Var.Doo + "/server.pem"
ssl.ca-file = Var.Doo + "/ca.pem"
setenv.add-environment = ( "HTTPS" => "on" )
}
EDIT: OK, 20 views & counting and no suggestion of an answer yet ...
I know I stated above that I believe the port forwarding is all good, but now I am having second thoughts on that. Any pointers either way?
OK, I spent some more time looking at this and managed to resolve the issue, which was two-fold.
As I latterly suspected, my initial assumption that the port forwarding was OK turned out to be incorrect: I had not forwarded the secure port that the https:// redirect targets default to, i.e. port 443. So the first part of the solution was adding that route to the port forwarding on my gateway router.
The second part of the solution is a textually minor change to the redirect code in the configuration file so that it filters on the ports rather than the scheme (the former code may also work, but I have not tested it). Here's the changed and tested code:
$SERVER["socket"] == ":443" {
ssl.engine = "enable"
ssl.pemfile = Var.Doo + "/server.pem"
ssl.ca-file = Var.Doo + "/ca.pem"
setenv.add-environment = ( "HTTPS" => "on" )
}
else $SERVER["socket"] == ":1080" {
$HTTP["host"] =~ "([^:/]+)" {
url.redirect = ( "^/(.*)" => "https://%1:443/$1" )
}
}
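For reference, the ([^:/]+) capture picks up the host name without any port suffix, so the redirect works whether the request arrives as host:1080 or just host, and pinning :443 in the target makes the HTTPS port explicit rather than relying on the default.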