I've started rate limiting my API using HAProxy, but my biggest problem is not so much the rate of requests as multi-threaded requests that overlap.
Even within my legal per-second limits, big problems are occurring when clients don't wait for a response before issuing another request.
Is it possible (say, per IP address) to queue requests and pass them one at a time to the back end for sequential processing?
Here is a possible solution to enforce one connection at a time per src IP.
You need to put the following HAProxy configuration in the corresponding frontend:
frontend fe_main
mode http
stick-table type ip size 1m store conn_cur
tcp-request connection track-sc0 src
tcp-request connection reject if { src_conn_cur gt 1 }
This will create a stick table that stores the concurrent connection count per source IP, and then rejects a new connection if there is already one established from the same source IP.
Note that browsers initiating multiple connections to your API, or clients behind a NAT, will not be able to use your API efficiently.
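To verify the rule is working, you can inspect the stick table through HAProxy's runtime API, assuming you have a stats socket configured (the socket path below is just an example and depends on your setup):

echo "show table fe_main" | socat stdio /var/run/haproxy.sock

Each entry lists a source IP and its current conn_cur value, so you can watch concurrent connections per client in real time.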
I'm trying to configure the ListenUDP or ListenTCP processors to get input from multiple, but very specific, IPs. I'm trying to find out if IP ranges can be used instead of a single IP, so that all my Palo Altos go to one processor, and so on.
ListenTCP & ListenUDP do not filter incoming traffic.
You can choose which network interface is used to listen for incoming traffic by setting Local Network Interface, so you can apply normal filtering techniques such as iptables or a firewall to control which traffic is allowed to reach that network interface.
In theory, received messages do get a tcp.sender or udp.sender attribute written to each FlowFile. So you could technically filter after the ListenTCP by comparing the value of this attribute and dropping messages that are not valid, but this is a lot less efficient than filtering network traffic outside of NiFi.
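As a sketch of that approach (the address prefix and property name here are purely illustrative), a RouteOnAttribute processor placed right after ListenTCP, with its Routing Strategy set to route to the property name, could pass through only messages from an expected sender range and auto-terminate everything else:

matched : ${tcp.sender:startsWith('10.20.30.')}

FlowFiles whose tcp.sender matches are routed to the "matched" relationship; the rest go to "unmatched".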
In a two node scenario, using roundrobin, I want haproxy to dispense two requests to each node before switching to the next node.
I have a messaging application, which makes one request for getting a messageID, then the next for sending the message.
If I use a standard roundrobin algorithm on two backend servers, this leads to one server only getting the messageID requests, and the other doing all the message sending.
This is not really balanced, as providing messageIDs is a no-brainer for the server, while handling the messages, which can be up to a few hundred MB, is all done by the other node.
I had a look at weighted roundrobin, but it seems not to work out when using a weight of 2 for both servers, as the weights appear to be calculated relative to each other.
I'd be glad for any hint, how to achieve haproxy switching the backend nodes after sending two requests, instead of one.
This is my current configuration, which still leads to a clear one-here-one-there round-robin pattern:
### frontend XTA Entry TLS/CA
frontend GMM_XTA_Entry_TLS_CA
mode tcp
bind 10.200.0.20:8444
default_backend GMM_XTA_Entrypoint_TLS_CA
### backend XTA Entry TLS/CA
backend GMM_XTA_Entrypoint_TLS_CA
mode tcp
server GMMAPPLB1-XTA-CA 10.200.0.21:8444 check port 8444 inter 1s rise 2 fall 3 weight 2
server GMMAPPLB2-XTA-CA 10.200.0.22:8444 check port 8444 inter 1s rise 2 fall 3 weight 2
Well, as stated, I would need a "two requests here, two requests there" round-robin pattern, but it keeps doing "one here, one there".
Glad for any hint, cheers,
Arend
To get the behavior you want, where requests go to each server two at a time, you can add an extra consecutive server line for each backend server, like so:
backend GMM_XTA_Entrypoint_TLS_CA
balance roundrobin
mode tcp
server GMMAPPLB1-XTA-CA_1 10.200.0.21:8444 check port 8444 inter 1s rise 2 fall 3
server GMMAPPLB1-XTA-CA_2 10.200.0.21:8444 track GMMAPPLB1-XTA-CA_1
server GMMAPPLB2-XTA-CA_1 10.200.0.22:8444 check port 8444 inter 1s rise 2 fall 3
server GMMAPPLB2-XTA-CA_2 10.200.0.22:8444 track GMMAPPLB2-XTA-CA_1
However, if you can use HAProxy 1.9 or above, you can also use the balance random option, which should distribute requests evenly across your servers at random. I think this may solve the balancing problem you stated above more directly. Also, balance random will keep your requests balanced even if the mix of request types changes.
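For reference, a minimal sketch of the same backend using balance random (HAProxy 1.9+) would look like:

backend GMM_XTA_Entrypoint_TLS_CA
balance random
mode tcp
server GMMAPPLB1-XTA-CA 10.200.0.21:8444 check port 8444 inter 1s rise 2 fall 3
server GMMAPPLB2-XTA-CA 10.200.0.22:8444 check port 8444 inter 1s rise 2 fall 3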
The proposed answer using four server entries in the backend did the job.
I am not sure it is the most elegant solution, but it helped me understand the usage of backends a bit more; again, thanks for that.
I am acting as a server which receives multiple requests from a client over a socket and handles them in a thread.
Should I set any parameter at the TCP level to set the maximum number of requests a connection can handle simultaneously?
I ask because on my server side, if processing a request is slow, I observe that other requests queue up (the client says the request has been sent, but I receive it late).
Kindly guide me.
If it takes a long time to do the work and you want to handle multiple connections simultaneously, you have to change how you do things.
If you are actively using a lot of CPU during processing a long request, you'll need multiple threads. That's the only way to actually get more CPU time / second -- assuming you have multiple cores available.
If you are waiting on things like file IO, then you can instead use asynchronous processing to handle the requests on a single thread, but just handle a little piece at a time.
Setting a maximum number of TCP connections won't help you handle requests more quickly. It will just reject connections, without even allowing first-come, first-served behavior; whether a specific client ever gets through will be essentially random.
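As a rough illustration of the threaded approach (Python is used here only for brevity; the port and pool size are arbitrary example values), here is a minimal sketch of a server that hands each connection to a worker thread, so one slow request no longer holds up the others:

import socket
from concurrent.futures import ThreadPoolExecutor

def handle(conn, addr):
    # Read one request, do the (possibly slow) work, then reply.
    with conn:
        data = conn.recv(4096)
        # ... process `data` here; this may take a while ...
        conn.sendall(b"OK\n")

def serve(host="0.0.0.0", port=9000, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool, \
         socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            pool.submit(handle, conn, addr)  # a slow handler no longer blocks accept()

if __name__ == "__main__":
    serve()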
I was asked to build a site, and one of the co-developers told me that I would need to include the keep-alive header.
Well, I read a lot about it and still have questions.
MSDN says:
The open connection improves performance when a client makes multiple
requests for Web page content, because the server can return the
content for each request more quickly. Otherwise, the server has to
open a new connection for every request
Looking at the path from my computer (A), through intermediate hops such as B, C, and E, to the IIS server (F):
When the IIS server (F) sends a keep-alive header (or the user sends keep-alive), does it mean that (E, C, B) save a connection which is only for my session?
Where is this info kept ("this connection belongs to Royi")?
Does it mean that no one else can use that connection?
If so, does it mean that the keep-alive header reduces the number of overlapped connection users?
If so, for how long is the connection saved for me? (In other words, if I set keep-alive, "keep" until when?)
P.S. For those who are interested:
clicking this sample page will return a keep-alive header
Where is this info kept ("this connection is between computer A and server F")?
A TCP connection is recognized by source IP and port and destination IP and port. Your OS, all intermediate session-aware devices and the server's OS will recognize the connection by this.
HTTP works with request-response: client connects to server, performs a request and gets a response. Without keep-alive, the connection to an HTTP server is closed after each response. With HTTP keep-alive you keep the underlying TCP connection open until certain criteria are met.
This allows for multiple request-response pairs over a single TCP connection, eliminating some of TCP's relatively slow connection startup.
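As a sketch, two requests reusing the same TCP connection under HTTP/1.1 might look like this on the wire (the host and paths are just examples):

GET /page.html HTTP/1.1
Host: www.example.com
Connection: keep-alive

GET /style.css HTTP/1.1
Host: www.example.com
Connection: keep-alive

Without keep-alive, the client would have to open a second TCP connection, with its own three-way handshake, for the second request.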
When the IIS server (F) sends a keep-alive header (or the user sends keep-alive), does it mean that (E, C, B) save a connection
No. Routers don't need to remember sessions. In fact, multiple TCP packets belonging to the same TCP session need not all go through the same routers; that is for TCP to manage. Routers just choose the best IP path and forward packets. Keep-alive is only relevant to the client, the server, and any intermediate session-aware devices.
which is only for my session?
Does it mean that no one else can use that connection?
That is the intention of TCP connections: it is an end-to-end connection intended for only those two parties.
If so, does it mean that the keep-alive header reduces the number of overlapped connection users?
Define "overlapped connections". See HTTP persistent connection for some advantages and disadvantages, such as:
Lower CPU and memory usage (because fewer connections are open simultaneously).
Enables HTTP pipelining of requests and responses.
Reduced network congestion (fewer TCP connections).
Reduced latency in subsequent requests (no handshaking).
If so, for how long is the connection saved for me? (In other words, if I set keep-alive, "keep" until when?)
A typical keep-alive response header looks like this:
Keep-Alive: timeout=15, max=100
See Hypertext Transfer Protocol (HTTP) Keep-Alive Header for example (a draft in which the keep-alive header is explained in greater detail than in RFCs 2616 and 2068):
A host sets the value of the timeout parameter to the time that the host will allow an idle connection to remain open before it is closed. A connection is idle if no data is sent or received by a host.
The max parameter indicates the maximum number of requests that a client will make, or that a server will allow to be made on the persistent connection. Once the specified number of requests and responses have been sent, the host that included the parameter could close the connection.
However, the server is free to close the connection after an arbitrary time or number of requests (just as long as it returns the response to the current request). How this is implemented depends on your HTTP server.
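If you want to see what a particular server actually advertises, something like the following (the URL is a placeholder) prints the response headers, including any Connection or Keep-Alive header, while discarding the body:

curl -sv -o /dev/null http://www.example.com/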
I am an experienced socket-level programmer in C++, but I do not understand what happens at the IP network level when a socket connection is left open (vs. being closed by calling the close function on the socket from within code).
I have studied the IP header and tried to understand if leaving a socket open has any implications at the IP level.
At the TCP level, leaving a socket open could make sense to me, because perhaps that means the "sequence number" field in the TCP header continues to increment. However, that would be a purely endpoint-based implementation, and therefore could not cut down on transit time for TCP packets. It is my understanding that leaving a connection open generally means that transit time between endpoints across the internet is decreased for packets.
The question is, does it mean anything at the IP level to leave a socket connection open?
The best guess I have is that if a socket connection remains open, that intervening gateways along the complete IP network path will attempt to leave an entry in their mapping table so that the next hop can be executed immediately, without needing to do a broadcast to all connected gateways in order to determine the next hop.
(Perhaps the overhead of DNS lookup is also avoided in this fashion.)
Am I correct in guessing that "leaving a connection open" corresponds to map entries remaining in place on intermediate IP gateways (which speeds up packet transfer)?
Direct answer: No.
Your question suggests that you don't fully understand the purpose of TCP, which is to establish a data stream between two hosts. Keeping that in mind, the purpose of leaving a connection open should be obvious: if you close the connection, the stream will end.
The status of a TCP connection is not visible on the IP level; it's only of relevance to TCP. With the exception of NAT gateways, intermediate hosts do not generally keep track of the status of TCP connections passing through them. (In many cases, it'd be impossible for them to do so -- large routers have far more connections running through them than they could possibly track.)
The best guess I have is that if a socket connection remains open, that intervening gateways along the complete IP network path will attempt to leave an entry in their mapping table so that the next hop can be executed immediately, without needing to do a broadcast to all connected gateways in order to determine the next hop.
This guess is incorrect. A router will have some sort of algorithm for picking a route based on the destination IP, based on a set of routing tables it keeps internally. Read up on BGP for details on how this is determined on large routers; on smaller routers, the routing table is typically defined by the administrator.
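As an illustration of such a lookup on an ordinary Linux host (the destination address is just a documentation example), you can ask the kernel which route it would pick for a packet:

ip route get 192.0.2.1

The answer comes straight from the routing table; nothing about open TCP connections or sockets is consulted.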
First of all, let's clear up a misconception:
that intervening gateways along the complete IP network path will attempt to leave an entry in their mapping table so that the next hop can be executed immediately, without needing to do a broadcast to all connected gateways in order to determine the next hop.
Routers never "broadcast to all connected gateways" in order to determine the next hop. If a packet arrives and the router does not already know how to route it, the packet is simply dropped (possibly with an ICMP error message being sent back to the source). The job of the routing protocols that run on routers is to prepopulate the router's routing table with routes learned from peers so that they are then prepared to receive packets and route them.
Also, "the complete IP network path" is not well-defined. The network path can change at any time as links fail on the network or new links become available. It can even change from one packet to the next in the absence of routing changes due to load balancing.
Back to your question: no, whether or not a socket is closed has no impact on IP. IP is stateless in the sense that every packet is self-contained and routed independently.
Whether or not a socket is closed does make a difference to TCP, but, as you note, that concerns only the two nodes at the endpoints of the connection.
The impact of "leaving a connection open" on speed, such as it is, is that establishing a connection in TCP requires a round-trip. But more to the point, a connection also has semantic meaning to most protocols running on TCP. Two bits of data sent on the same connection are related in a way that two bits of data sent on different connections are not.