Is there a way to set HAProxy to listen on a specific port only if the hostname from the IP used matches a certain criteria?
The distinction is important: my server has multiple IPs, each of which matches a domain (www1.xxxx.com, www2.xxxx.com, etc.).
I want to open port YYYY only if the domain used to connect to HAProxy is www.xxxx.com. If the connection comes in via www1.xxxx.com, that port should be refused.
Note that HAProxy is used in TCP mode, not HTTP.
Is this possible?
Thank you.
Maybe something like this?
listen port_3306
    bind :3306
    mode tcp
    # match connections whose destination address is the IP that www.xxxx.com resolves to
    acl my-ip dst 216.58.204.78
    # refuse the connection when any other local address (www1, www2, ...) was used
    tcp-request connection reject if !my-ip
    server my-test-comms localhost:3306 check
I'm attaching the documentation link; maybe you can play around with those settings.
Hope this helps.
I have two instances of HAProxy. Both instances have stats enabled and are working fine.
I am trying to combine the stats from both instances into one, so that I can use a single HAProxy stats page to view the frontend/backend stats of both. I've tried having the stats listener on the same port for both HAProxy instances, but this isn't working. I've also tried using the stats socket interface, but that only reports on one of the instances as well.
Any ideas?
My one haproxy config file looks like this:
global
    daemon
    maxconn 256
    log 127.0.0.1 local0 debug
    log-tag haproxy
    stats socket /tmp/haproxy

defaults
    log global
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:8000
    default_backend servers
    log global
    option httplog clf

backend servers
    balance roundrobin
    server ws8001 localhost:8001
    server ws8002 localhost:8002
    log global

listen admin
    bind *:7000
    stats enable
    stats uri /
The other haproxy config is the same except the front/backend server IPs are different.
While perhaps not an exact answer to this specific question, I've seen this kind of question enough that I think it deserves to be answered.
When running with nbproc greater than 1, the Stack Exchange guys have a unique solution. They have a listen section that receives SSL traffic and then uses send-proxy to 127.0.0.1:80. They then have a frontend that binds to 127.0.0.1:80 like this: bind 127.0.0.1:80 accept-proxy. Inside that frontend they pin it to a single process, e.g. bind-process 1, and in the global section they do the following:
global
    stats socket /var/run/haproxy-t1.stat level admin
    stats bind-process 1
The advantage of this is that they get multiple cores for SSL offloading and then a single core dedicated to load balancing traffic. All traffic ultimately flows through this frontend and therefore they can accurately measure stats from that frontend.
This can't work. HAProxy keeps stats separate in each process; it has no capability to combine the stats of multiple processes.
That said, you are of course free to use external monitoring tools (like Munin, Graphite, or even Nagios) which can aggregate the CSV data from multiple stats sockets and display them in unified graphs. These tools are, however, outside the scope of core HAProxy.
I'm reading the Beej guide to network programming, and I came across this
int getaddrinfo(const char *node, // e.g. "www.example.com" or IP
const char *service, // e.g. "http" or port number
const struct addrinfo *hints,
struct addrinfo **res);
You give this function three input parameters, and it gives you a
pointer to a linked-list, res, of results.
The node parameter is the host name to connect to, or an IP address.
Next is the parameter service, which can be a port number, like "80",
or the name of a particular service (found in The IANA Port List or
the /etc/services file on your Unix machine) like "http" or "ftp" or
"telnet" or "smtp" or whatever.
Are Unix ports and protocols the same thing? For example, is https the same as port 443, and http the same as port 80?
Quick answer
Ports and protocols are different things, but a protocol usually has a default port; for example, the default port for an HTTP web server is 80.
Explanation
A port is a TCP/IP-level entity; it is the endpoint to which network traffic is sent.
A protocol is an application-level entity; it is the language the client and server use to communicate.
Basically, you can speak any protocol over any port (just make sure the server and client use the same ones). So you can speak HTTP over port 12345, or conversely use port 80 to speak FTP.
For example, when you type stackoverflow.com into your browser, the browser first adds http:// before the host name and then :80 after it (the default port for HTTP), so the actual URL that is accessed is http://stackoverflow.com:80/. If you type https://stackoverflow.com, the browser automatically adds :443, and so on.
But if you try to open https://stackoverflow.com:80/ you will get an error, as it is the wrong protocol for that port.
What's more, you can configure your own server to use different ports, and then you will have to indicate the protocol AND the port for each request, for example: http://example.com:12345/ or https://example.com:54321/
Services such as "http" and "https" have default ports; in this case, 80 and 443. So they do correspond to ports. I would, however, not call them "the same thing". You could, for example, perform the HTTP protocol over a different port if you wish. When you type http://something:8080 in your browser, for example, you are doing HTTP over port 8080. In the case of getaddrinfo, it allows you to use the name of the service, and it will internally use the default port number as determined by a configuration file (e.g., /etc/services on some Unix systems).
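To illustrate that last point, here is a minimal sketch; the host "www.example.com" and the service "http" are just illustrative values. getaddrinfo() accepts the service name, looks up its default port (e.g. in /etc/services), and fills it into the result, which the code prints back out.

    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof hints);
        hints.ai_family   = AF_INET;       /* IPv4 only, to keep the example short */
        hints.ai_socktype = SOCK_STREAM;   /* TCP */

        /* "http" is looked up as a service name and becomes port 80 */
        int status = getaddrinfo("www.example.com", "http", &hints, &res);
        if (status != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(status));
            return 1;
        }

        struct sockaddr_in *sin = (struct sockaddr_in *)res->ai_addr;
        printf("resolved port: %d\n", ntohs(sin->sin_port));   /* prints 80 */

        freeaddrinfo(res);
        return 0;
    }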
No. A port is a number, a protocol is a specification. They are associated, for convenience, but not identical.
And ports are part of TCP/IP, not Unix. The entire question is basically just a category mistake.
Can two applications on the same machine bind to the same port and IP address? Taking it a step further, can one app listen to requests coming from a certain IP and the other to another remote IP?
I know I can have one application that starts off two threads (or forks) to have similar behavior, but can two applications that have nothing in common do the same?
The answer differs depending on what OS is being considered. In general though:
For TCP, no. You can only have one application listening on the same port at one time. Now if you had 2 network cards, you could have one application listen on the first IP and the second one on the second IP using the same port number.
For UDP (Multicasts), multiple applications can subscribe to the same port.
Edit: Since Linux Kernel 3.9 and later, support for multiple applications listening to the same port was added using the SO_REUSEPORT option. More information is available at this lwn.net article.
Yes (for TCP) you can have two programs listen on the same socket, if the programs are designed to do so. When the socket is created by the first program, make sure the SO_REUSEADDR option is set on the socket before you bind(). However, this may not be what you want: an incoming TCP connection will be directed to only one of the programs, not both, so it does not duplicate the connection; it just allows two programs to service incoming requests. For example, web servers have multiple processes all listening on port 80, and the OS sends a new connection to the process that is ready to accept new connections.
SO_REUSEADDR
Allows other sockets to bind() to this port, unless there is an active listening socket bound to the port already. This enables you to get around those "Address already in use" error messages when you try to restart your server after a crash.
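For illustration, the usual pattern is to set the option between socket() and bind(); a minimal sketch of such a helper, with error handling omitted:

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    int make_listener(unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        /* must be set before bind(); lets the server rebind the port right after
           a crash/restart instead of failing with "Address already in use" */
        int yes = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        listen(fd, 10);
        return fd;
    }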
Yes.
1. Multiple listening TCP sockets, all bound to the same port, can co-exist, provided they are all bound to different local IP addresses. Clients can connect to whichever one they need to. This excludes 0.0.0.0 (INADDR_ANY).
2. Multiple accepted sockets can co-exist, all accepted from the same listening socket, all showing the same local port number as the listening socket.
3. Multiple UDP sockets all bound to the same port can all co-exist provided either the same condition as at (1) or they have all had the SO_REUSEADDR option set before binding.
4. TCP ports and UDP ports occupy different namespaces, so the use of a port for TCP does not preclude its use for UDP, and vice versa.
Reference: Stevens & Wright, TCP/IP Illustrated, Volume II.
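To illustrate the first point, here is a rough sketch that binds two listening TCP sockets to the same port on different local addresses; 192.168.1.10 is a placeholder for an address actually assigned to the machine, and error handling is omitted:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    static int listen_on(const char *ip, unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);
        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        listen(fd, 10);
        return fd;
    }

    int main(void)
    {
        /* same port, different local IPs: both binds succeed and co-exist */
        int a = listen_on("192.168.1.10", 8000);   /* placeholder address */
        int b = listen_on("127.0.0.1", 8000);
        /* ... accept() on whichever socket the client connected to ... */
        (void)a; (void)b;
        return 0;
    }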
In principle, no.
It's not written in stone, but it's the way all APIs are written: the app opens a port, gets a handle to it, and the OS notifies it (via that handle) when a client connection (or, in the UDP case, a packet) arrives.
If the OS allowed two apps to open the same port, how would it know which one to notify?
But... there are ways around it:
As Jed noted, you could write a 'master' process, which would be the only one that really listens on the port and notifies others, using any logic it wants to separate client requests.
On Linux and BSD (at least) you can set up 'remapping' rules that redirect packets from the 'visible' port to different ones (where the apps are listening), according to any network related criteria (maybe network of origin, or some simple forms of load balancing).
Yes, definitely. As far as I remember, from kernel version 3.9 onwards, support for SO_REUSEPORT was introduced. SO_REUSEPORT allows binding to the exact same port and address, as long as the first server sets this option before binding its socket.
It works for both TCP and UDP. Refer to the link for more details: SO_REUSEPORT
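A minimal sketch of the option in use (Linux 3.9 or later; the option is set before bind(), port 8080 is a placeholder, and error handling is omitted):

    #define _GNU_SOURCE
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        /* SO_REUSEPORT lets several processes bind() the same address:port;
           the kernel then spreads incoming connections across them */
        int one = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof one);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);

        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        listen(fd, 10);
        /* start a second copy of this program: its bind() on :8080 also succeeds */
        return 0;
    }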
No. Only one application can bind to a port at a time, and behavior if the bind is forced is indeterminate.
With multicast sockets -- which sounds like nowhere near what you want -- more than one application can bind to a port, as long as SO_REUSEADDR is set in each socket's options.
You could accomplish this by writing a "master" process, which accepts and processes all connections, then hands them off to your two applications that need to listen on the same port. This is the approach that web servers and the like take, since many processes need to listen on port 80.
Beyond this, we're getting into specifics -- you tagged both TCP and UDP, which is it? Also, what platform?
You can have one application listening on one port for one network interface. Therefore you could have:
httpd listening on remotely accessible interface, e.g. 192.168.1.1:80
another daemon listening on 127.0.0.1:80
Sample use case could be to use httpd as a load balancer or a proxy.
When you create a TCP connection, you ask to connect to a specific TCP address, which is a combination of an IP address (v4 or v6, depending on the protocol you're using) and a port.
When a server listens for connections, it can inform the kernel that it would like to listen to a specific IP address and port, i.e., one TCP address, or on the same port on each of the host's IP addresses (usually specified with IP address 0.0.0.0), which is effectively listening on a lot of different "TCP addresses" (e.g., 192.168.1.10:8000, 127.0.0.1:8000, etc.)
No, you can't have two applications listening on the same "TCP address," because when a message comes in, how would the kernel know to which application to give the message?
However, in most operating systems you can set up several IP addresses on a single interface (e.g., if you have 192.168.1.10 on an interface, you could also set up 192.168.1.11, if nobody else on the network is using it), and in those cases you could have separate applications listening on port 8000 on each of those two IP addresses.
Just to share what #jnewton mentioned.
I started an nginx and an embedded Tomcat process on my Mac. I can see both processes running on 8080.
LT<XXXX>-MAC:~ b0<XXX>$ sudo netstat -anp tcp | grep LISTEN
tcp46 0 0 *.8080 *.* LISTEN
tcp4 0 0 *.8080 *.* LISTEN
Another way is to use a program listening on one port that analyses the kind of traffic (SSH, HTTPS, etc.) and redirects it internally to another port, on which the "real" service is listening.
For example, for Linux, sslh: https://github.com/yrutschle/sslh
If at least one of the remote IPs is already known, static, and dedicated to talking only to one of your apps, you can use an iptables rule (table nat, chain PREROUTING) to redirect incoming traffic from this address on the "shared" local port to any other port where the appropriate application actually listens.
Yes.
From this article:
https://lwn.net/Articles/542629/
The new socket option allows multiple sockets on the same host to bind to the same port
Yes and no. Only one application can actively listen on a port. But that application can bequeath its connection to another process. So you could have multiple processes working on the same port.
You can make two applications listen for the same port on the same network interface.
There can only be one listening socket for the specified network interface and port, but that socket can be shared between several applications.
If you have a listening socket in an application process and you fork that process, the socket will be inherited, so technically there will now be two processes listening on the same port.
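A rough sketch of that fork-based sharing, assuming port 8080 is free and with error handling omitted: the parent binds and listens once, and after fork() both processes accept from the inherited socket.

    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);           /* placeholder port */

        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 16);

        /* after fork() both processes hold the same listening socket; each
           incoming connection goes to whichever process calls accept() first */
        pid_t pid = fork();

        for (;;) {
            int cfd = accept(lfd, NULL, NULL);
            printf("%s accepted fd %d\n", pid == 0 ? "child" : "parent", cfd);
            close(cfd);
        }
    }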
I have tried the following, with socat:
socat TCP-L:8080,fork,reuseaddr -
And even though I have not made a connection to the socket, I cannot listen twice on the same port, in spite of the reuseaddr option.
I get this message (which I expected before):
2016/02/23 09:56:49 socat[2667] E bind(5, {AF=2 0.0.0.0:8080}, 16): Address already in use
If by applications you mean multiple processes, then yes, but generally NO.
For example, the Apache server runs multiple processes on the same port (generally 80). It's done by having the parent process bind to the port and then fork worker processes that inherit the listening socket and accept connections.
Short answer:
Going by the answer given here, you can have two applications listening on the same IP address and port number, as long as one of them uses a UDP port while the other uses a TCP port.
Explanation:
The concept of a port is relevant at the transport layer of the TCP/IP stack, thus as long as you are using different transport-layer protocols, you can have multiple processes listening on the same <ip-address>:<port> combination.
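As a quick sketch (port 9000 here is just a placeholder, assumed otherwise unused): one TCP socket and one UDP socket can both be bound to the same <ip-address>:<port>, and they could just as well belong to two different processes.

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);           /* placeholder port */

        /* TCP and UDP ports live in separate namespaces, so both binds succeed */
        int tcp = socket(AF_INET, SOCK_STREAM, 0);
        bind(tcp, (struct sockaddr *)&addr, sizeof addr);
        listen(tcp, 10);

        int udp = socket(AF_INET, SOCK_DGRAM, 0);
        bind(udp, (struct sockaddr *)&addr, sizeof addr);

        return 0;
    }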
One doubt that people have is: if two applications are running on the same <ip-address>:<port> combination, how will a client running on a remote machine distinguish between the two? If you look at the IP-layer packet header (https://en.wikipedia.org/wiki/IPv4#Header), you will see that bits 72 to 79 are used to define the protocol; this is how the distinction is made.
If, however, you want to have two applications on the same TCP <ip-address>:<port> combination, then the answer is no. (An interesting exercise would be to launch two VMs, give them the same IP address but different MAC addresses, and see what happens: you will notice that sometimes VM1 gets the packets and other times VM2 does, depending on ARP cache refreshes.)
I feel that by making two applications run on the same <ip-address>:<port> you want to achieve some kind of load balancing. For this you can run the applications on different ports and write iptables rules to split the traffic between them.
Also see #user6169806's answer.