mitmproxy as a reverse proxy - force SSL?

Say you have some toy project that should get SSL support. Simply set up mitmproxy as a reverse proxy on port 443 and you’re done (mitmdump -p 443 --mode reverse:http://localhost:80/). Mitmproxy auto-detects TLS traffic and intercepts it dynamically.
https://docs.mitmproxy.org/stable/concepts-modes/
Can you have mitmdump enforce SSL? Or verify that the client really uses SSL?

You can write a mitmproxy addon that checks just that:
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Reject any request that did not arrive over TLS on the client connection.
    if not flow.client_conn.tls_established:
        flow.response = http.HTTPResponse.make(400)
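Saved as a script (say enforce_tls.py, a name made up here), the addon can be loaded into the reverse proxy from the first post:
mitmdump -p 443 --mode reverse:http://localhost:80/ -s enforce_tls.py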

Related

Transparent mode redirects to host itself

I'm new to mitmproxy and I'm probably doing something wrong.
I'm running Mitmproxy in transparent mode on Ubuntu and followed the steps on https://docs.mitmproxy.org/stable/howto-transparent/. Its IP is 10.50.10.117.
I then added a line to /etc/hosts in my client machine (10.50.10.116) that points to the mitmproxy server for a test domain (example.com). So this is only on the client machine, and not on the machine running mitmproxy.
Then when I execute 'curl http://example.com' on the client machine, I see the request get to mitmproxy but it errors out with:
10.50.10.116:60936: GET http://example.com/
Host: example.com
User-Agent: curl/7.68.0
Accept: */*
<< Server connection to ('10.50.10.117', 80) failed: Error connecting to "10.50.10.117": [Errno 111] Connection refused
So mitmproxy is trying to connect to its own host on port 80. Why is it not proxying the request to the real example.com?
Thanks.
Henry
https://docs.mitmproxy.org/stable/concepts-modes/#transparent-proxy has an illustration that shows your problem: because of your /etc/hosts entry, the TCP packet's destination IP address is mitmproxy itself and not the actual target, so mitmproxy has nowhere to forward the request. Transparent mode expects Layer 2 redirection: the packets keep the real target's destination IP and reach mitmproxy because it is the next hop, not because DNS points the name at it.
It looks like you want to run mitmproxy as a reverse proxy. Alternatively, starting with mitmproxy 7 (currently only available as development snapshots, but I encourage you to try them out), you can run it in regular mode and it will pick up host headers for the target information.
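For that, a reverse-proxy invocation could look roughly like the line below (a sketch: it assumes the /etc/hosts override stays on the client only, so the mitmproxy host itself still resolves example.com to the real server):
mitmdump -p 80 --mode reverse:http://example.com/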

Is it possible to redirect a TCP connection based on host name?

What I want to be able to do is connect to a postgres server like this:
psql -h postgres-a.example.com -p 9000
That connection should be received by a proxy server (like nginx or haproxy) and redirected to database A based on the host name postgres-a.example.com. If I use postgres-b.example.com and the same port, it should go to database B.
I have been researching this, but I am still not 100% sure how this would work. I read that the only way to redirect a TCP connection (psql) based on host name is to use the SNI header. But I still don't understand whether we will need an SSL certificate for this, or whether we will need to use https://postgres-a.example.com (that doesn't make any sense to me). How will this work?
Can someone help me understand this?
Yes. You will need a certificate for TLS/SSL and you can route the requests based on req.ssl_sni to the proper backend.
I'm not sure whether psql sends SNI; that's something you will have to check.
frontend public_ssl
    mode tcp
    bind :::9000 v4v6
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    # SNI is read from the ClientHello; TLS itself is passed through to the backend.
    use_backend postgres-a if { req.ssl_sni -i postgres-a.example.com }
    use_backend postgres-b if { req.ssl_sni -i postgres-b.example.com }

backend postgres-a
    mode tcp
    server postgres-a FURTHER SERVER PARAMS

backend postgres-b
    mode tcp
    server postgres-b FURTHER SERVER PARAMS
I have created a blog post with a picture for a more detailed description.
https://www.me2digital.com/blog/2019/05/haproxy-sni-routing/
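Whether psql itself sends SNI is a separate question, but the haproxy routing can be exercised independently with openssl's s_client, which sends an explicit server name (host and port taken from the question); with option tcplog enabled, the haproxy log then shows which backend was selected even if the backend does not complete the handshake:
openssl s_client -connect postgres-a.example.com:9000 -servername postgres-a.example.com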

Proxy requests to backend using h2c

Re-asking the question from HA-Proxy discourse site here in the hopes of getting more eyes on it.
I am using HA-Proxy version 1.9.4 2019/02/06 to proxy HTTP traffic to an h2c backend. However, I am seeing HA-Proxy set the :scheme to https (and, as far as I can tell, use SSL in the request) when proxying the request. When I hit the backend directly, the :scheme is set to http and the request is non-SSL, as expected. I have verified this HA-Proxy behavior using Wireshark.
Any suggestions on what I should change in my configuration to make sure that the :scheme gets set to http when proxying the request to the backend?
I am using curl 7.54.0 to make requests:
$ curl http://localhost:9090
where HA-Proxy is listening on port 9090.
My HA-Proxy config file:
global
    maxconn 4096
    daemon

defaults
    log global
    option http-use-htx
    timeout connect 60s
    timeout client 60s
    timeout server 60s

frontend waiter
    mode http
    bind *:9090
    default_backend local_node

backend local_node
    mode http
    server localhost localhost:8080 proto h2
It's not supported yet. The client => haproxy connection can be HTTP/2, but the haproxy => server connection cannot.
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#1.1
HTTP/2 is only supported for incoming connections, not on connections going to servers.
Just add proto h2 to the server definition.
To quote the configuration example:
server server1 192.168.1.13:80 proto h2
It's an experimental feature of haproxy 1.9; you must enable option http-use-htx to use it.
option http-use-htx is enabled by default since haproxy-2.0-dev3.
This was reported as an issue on the haproxy GitHub and has been fixed in version 2.0.
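As a quick sanity check, the backend's own h2c behaviour described in the question can be reproduced by telling curl to start with cleartext HTTP/2 straight away (the port is the backend port from the question):
curl -v --http2-prior-knowledge http://localhost:8080/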

Is there any public PeerServer over HTTPS?

I have a simple web application using peerjs here: https://github.com/chakradarraju/bingo. I was planning to use github.io to put up a demo, but github.io is served only over HTTPS, and the default PeerServer used by the peerjs library doesn't support HTTPS.
Is there any public HTTPS PeerServer that I can use?
The simple answer to this is no. It's unfortunate that browsers recently disallowed plain http requests from https pages for any address except localhost.
One way to do it is to set up an SSH port forward so that you can fool the browser into thinking it is talking to localhost. OK for a demo, but not production. Here is some info (from https://www.ssh.com/ssh/tunneling/example):
In OpenSSH, remote port forwardings are specified using the -R option. For example:
ssh -R 8080:localhost:80 public.example.com
This allows anyone on the remote server to connect to TCP port 8080 on the remote server. The connection will then be tunneled back to the client host, and the client then makes a TCP connection to port 80 on localhost. Any other host name or IP address could be used instead of localhost to specify the host to connect to.
Alternatively, if you have your own web server, you can use Let's Encrypt (https://letsencrypt.org/) to run it over https without needing to buy an SSL cert. Their tools are so good that it's a five-minute exercise to get https on your server.
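For example, obtaining a certificate with certbot in standalone mode is a one-liner (demo.example.com is a placeholder, and port 80 on that host must be reachable for the challenge):
sudo certbot certonly --standalone -d demo.example.com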
Give www a try. It can automatically create valid certificates using Let's Encrypt, or self-signed ones.
https://go-www.com/post/how-it-works/
Usage of ./www:
  -p port
        Listen on port (default 8000)
  -q quiet
        quiet mode
  -r root
        Document root path (default ".")
  -s your-domain.tld
        https://your-domain.tld (if "localhost", the port can be other than 443)
This issue can be resolved by setting options.secure to true as mentioned here.
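For reference, a minimal sketch of what that looks like in the Peer constructor (host, port and path are placeholders for whatever HTTPS-capable PeerServer ends up being used):
const peer = new Peer({
  host: "peerserver.example.com",  // placeholder HTTPS-capable PeerServer
  port: 443,
  path: "/myapp",                  // placeholder mount path of the PeerServer
  secure: true,                    // connect to the server over https/wss
});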

All traffic forwarding using SSH tunnel in FreeBSD

I connect to my remote VPS like this:
ssh -f -C2qTnN -D 1080 username@xxx.xxx.xxx.xxx
Then I set Firefox's proxy settings to SOCKS5 with 127.0.0.1:1080. That works.
Now I am trying to redirect all traffic from my FreeBSD machine to localhost:1080, but I have no idea how. Can you help?
If you want to redirect all traffic, do not use a SOCKS5 proxy; use ssh's -w option instead, which creates something like a VPN connection with its own TUN device and is better suited to tunnelling system-wide traffic.
There are many examples on the internet, for example here, but this is a fairly advanced use case.
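As a rough sketch of the -w approach (the tunnel device numbers, the 10.0.0.x addresses and the root logins are assumptions; the server also needs PermitTunnel yes in sshd_config plus IP forwarding and NAT for the tunnel subnet, and the ifconfig lines use BSD syntax):
# On the FreeBSD client: open the tunnel; this leaves a root shell on the VPS.
ssh -w 0:0 root@xxx.xxx.xxx.xxx
# In that shell on the VPS: configure the remote end of the tunnel.
ifconfig tun0 10.0.0.1 10.0.0.2 up
# In a second terminal on the FreeBSD client: configure the local end.
ifconfig tun0 10.0.0.2 10.0.0.1 up
# Keep a host route to the VPS via the existing gateway before changing the
# default route, then send everything else through the tunnel.
route add default 10.0.0.1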