How do game proxies minimize network latency in their infrastructure? [closed] - sockets

In South America, many gamers use something called a proxy service, which takes their network connection, routes it through the provider's own infrastructure, and exits close to the game server's location. E.g. they want to ensure that the TCP traffic does not cross the USA, for latency reasons. So, how could they manipulate the path taken by a TCP connection?
a) Do they just open TCP connections during low-traffic times (e.g. 4 in the morning) and then keep them open for the rest of the day?
b) Do they keep trying to open TCP connections UNTIL they get a lower-latency one, and then switch their internal traffic to that connection?
c) Is the only way to minimize TCP latency over long distances to rent private peerings or to choose a hoster with good ones?
d) Could sending UDP packets over such distances reduce latency IF and only if you avoid packet loss (e.g. by sending the traffic redundantly, with multiple copies of each packet)?
It all boils down to the question of whether you can somehow control what path a TCP connection takes, or whether you can't.
This question is all about the networking part; it is NOT about the end user's computer (Leantrix/TCP optimizations) or the game servers. These services somehow gain additional latency savings, and I'm curious how they do it.
Thank you for the great year I've been with SO; it's been a pleasure to talk to experts about this stuff.

If you are referring to things like Battleping and the like, here's what someone wrote in a forum that seems to make sense; I suppose the same holds true for South America. The relevant info is "SSH tunnel":
The advent of proxy tunnels came from the demand of Oceanic WoW players. In case you don't know, the backbone connecting Oceania to America is awful, and once packets leave Australia/New Zealand they gain an extra 200ms because gaming packets get shaped leaving our country, and then get shaped again going into America. Generally you can ping around 200ms to US servers, but in real time the game data ends up getting deprioritized to hell and back and you'll have a latency of around 500ms.
The way Lowerping, Battleping, Smoothping, etc. all work is by establishing an SSH tunnel to a proxy in America and sending the SC2/WoW data through it. SSH traffic has much higher priority than gaming traffic, so instead of being delayed, the data flies through. AFAIK it also doesn't get shaped as incoming traffic from the Blizzard server side, because the packets originate from a proxy inside the US.
Feel free to correct me, I might be wrong on some things, but that's what I've picked up from using the very first tunneling service (Lowerping) since it came out.
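For illustration, here is a minimal sketch of the relay idea such services are built on: a small TCP forwarder running on a proxy host near the game servers, accepting the player's connection and piping bytes to the real server. The addresses are hypothetical placeholders, and real services carry this traffic inside an SSH tunnel rather than a plain relay.

    import socket
    import threading

    # Hypothetical endpoints: the proxy listens locally and forwards to the
    # game server. Real services would carry this traffic inside an SSH tunnel.
    LISTEN_ADDR = ("0.0.0.0", 9000)
    GAME_SERVER = ("game.example.com", 3724)  # placeholder address

    def pipe(src, dst):
        """Copy bytes one way until either side closes."""
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        finally:
            src.close()
            dst.close()

    def handle(client):
        upstream = socket.create_connection(GAME_SERVER)
        # One thread per direction: client->server and server->client.
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN_ADDR)
    server.listen(64)
    while True:
        conn, _ = server.accept()
        handle(conn)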

As far as I know, proxy servers do not speed up your connection. They usually send your data along a longer path, and the receiver sees the data packet as a packet originating from the proxy server.
The computer that sends data cannot determine the path it takes; it can only determine the endpoints. When connected to a proxy, the endpoint of the first leg is the proxy, and the proxy then retransmits the data packet to the second destination. A proxy is simply a server configured for this retransmission.
Think of the internet as a spider web where each joint is a router. These routers maintain a table called the 'routing table', which records where to forward data packets according to their destination. The routing table is updated automatically so that packets are sent along the shortest path.
So, if we do not interfere, the data packets already travel the shortest path.
Now, the exception:
If the proxy service provider has a dedicated private network connection from the proxy location to the game server location (something like a private highway with no traffic), the data packets can be delivered more quickly. But this is highly unlikely, because nobody lays their own private wires around the world instead of using the existing internet backbones.
Let's say:
A - Origin
B - Destination
C - Proxy
Then, the normal way packets go:
A -> B (quickest path, determined by the routers)
With a proxy, packets go:
A -> C -> B (usually a longer path)
With a proxy and a high-speed private network C-D (a highly unlikely scenario; nobody has such things):
A -> C --(less traffic)--> D -> B (can give a speed gain)
Some other ways of increasing the speed of the connection:
You can use UDP instead of TCP. TCP has error-correction features: data is checksummed, acknowledged by the receiving host, and retransmitted when lost, which slows down transmission. With UDP this checking happens only at a minimal level, so the transmitted data might occasionally have errors or go missing, but it transmits more quickly.
Standard data-transfer protocols, e.g. HTTP, carry many fields besides the actual data: checksums, browser information, OS information, etc. If you design a custom protocol that strips these out, the amount of data to be transmitted becomes smaller, which also speeds up the communication.
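As a rough sketch of that idea, here is a hypothetical minimal binary message sent over UDP: a fixed header of a few bytes instead of verbose text headers. The field layout and the destination address are made up for illustration.

    import socket
    import struct

    # Hypothetical wire format: sequence number (4 bytes), message type (1 byte),
    # payload length (2 bytes), then the raw payload. No text headers at all.
    HEADER = struct.Struct("!IBH")

    def encode(seq, msg_type, payload: bytes) -> bytes:
        return HEADER.pack(seq, msg_type, len(payload)) + payload

    def decode(datagram: bytes):
        seq, msg_type, length = HEADER.unpack_from(datagram)
        return seq, msg_type, datagram[HEADER.size:HEADER.size + length]

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Placeholder destination; 7 bytes of header versus hundreds of bytes for a
    # typical HTTP request line plus headers.
    sock.sendto(encode(1, 0x01, b"move:12,34"), ("203.0.113.10", 9999))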

Related

TCP is on top of IP, what does this mean? [closed]

I always hear about the layers of the internet and I vaguely understand them. But what confuses me most is that the transport layer (including the TCP protocol) lies on top of the internet layer (including the IP protocol).
What does this mean, for someone who has a foggy understanding of the internet's mechanisms? (I'm not a CS student or anything, just a hobby programmer.)
The picture I have of the internet is that the network card sends/receives signals (packets) to/from the internet through a wired connection or WiFi; then the OS, using the socket API, sends/receives these packets, acting as a layer between the hardware and the application, which in turn uses some high-level protocol such as HTTP to interpret the transferred data. These protocols are usually provided by languages, e.g. Python or Java.
I guess, then, that the IP and TCP protocols are used at the level of the socket API? But I need more details. I hope the explanation can be in terms of coding/programming/implementation, because the abstractions used in this area confuse me.
Thank you, and sorry for my bad English.
This is part of a layered approach to networking. Each layer has its own functionality:
IP (Internet Protocol) is in charge of delivering a packet (or datagram) from one interface with an assigned IP address, on one machine, to another interface on the same or another machine (node). Both nodes can be on the same LAN or on different LANs connected through various paths (LANs and routers). Basically, it makes the packet get from the source IP to the destination IP. It provides a best-effort service: it does not guarantee that the IP packet will arrive; the packet can be lost along the way.
Above layer 3 (IP) in the so-called TCP/IP stack sits the transport layer. Its main functionality is to multiplex the lower layer's (IP's) service (carrying a packet from src to dst) among different applications. This is why every transport layer protocol has the concept of a port, or more generically a Transport Service Access Point (TSAP). UDP, TCP, and SCTP all do this. UDP provides an unreliable service to the application; TCP provides a connection-oriented, reliable transport service. This layer makes a message sent from application A on node Y reach application A1 on node Z, either reliably or unreliably (while IP only takes care of carrying the packet from node Y to node Z).
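To tie this to code: with the socket API you normally program at the transport layer, and the OS handles the IP layer underneath. A minimal sketch (the host name is a placeholder) showing that a TCP connection is identified by a port on top of each of the two IP addresses:

    import socket

    # TCP: SOCK_STREAM rides on top of IP. The OS picks a source port and
    # handles the IP layer (addressing, routing) underneath.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.com", 80))  # placeholder host
    print("local (ip, port): ", tcp.getsockname())   # e.g. ('192.0.2.5', 54321)
    print("remote (ip, port):", tcp.getpeername())   # e.g. ('93.184.216.34', 80)
    tcp.close()

    # UDP: SOCK_DGRAM also rides on IP, but with no connection or reliability.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello", ("example.com", 9))  # discard port, placeholder
    udp.close()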
You will need to read a little about the OSI layered model and the TCP/IP layered model.
If you need more info, I can point you to a training I have about IPv6 with a good introduction to networking: http://www.slideshare.net/rodolk/networking-tcpip-stack-introduction-ipv6
TCP is a protocol known as the "Transmission Control Protocol"; by specification it has features in place to make sure that transmitted data arrives intact. On the other hand, there is UDP, the "User Datagram Protocol", which also works on top of IP; by specification it does not verify transmitted data, so it is less useful where files must be fully intact (it is used more for video streaming, where some lost frames are acceptable, as opposed to binary file transfers, where incorrect data means corruption and the whole file would be useless).
As for IP: IP is an addressing protocol, allowing a network to address and communicate with any machine that lives within it. IP stands for Internet Protocol, and it defines the fundamental way two machines communicate over the "internet". It does not define how communications are handled, in ways such as being checked for data integrity, etc.
So, to summarize, TCP and UDP are effectively extensions of IP. It is entirely possible, however, to have a socket-based connection that is neither TCP nor UDP, and I expect it is also possible to have some sort of MAC-address protocol (as opposed to an IP-address protocol). I don't know of any protocols similar to IP, but I imagine they exist. In reality, though, using TCP over something other than IP is entirely unlikely: if you are going to the effort of creating a custom protocol, chances are you will want it fully custom and won't want to stick to design specifications meant for another protocol layer.
Note that calling it a "TCP/IP" connection is probably only done for legacy reasons. A lot of terms like this persist because, before the technology "bubble" growth, there were competing alternatives to IP. Even today there is IPv6, which is technically an alternative to IPv4. It is also possible that we might one day outgrow IPv6, and at that point there could be something other than IP to worry about.

TCP or UDP for lots of connections?

I want to create a P2P network with the following characteristics:
low latency is not really important
losing packets is okay
the nodes would only send tiny amounts of data around
there will be no NAT/firewall issues; every node has an open port on its public IP
every node is connected to every other node
Usually I would use TCP for anything not time-critical, but the last requirement causes the nodes to hold lots of open connections for a long time. If I remember correctly, using TCP to connect to 1000 servers would mean I had to use 1000 ports to handle these connections. UDP, on the other hand, would only require a single port on each node.
So my question is: Is TCP able to handle the above requirements in a network with e.g. 1000 nodes without tweaking the system? Would UDP be better suited in this case? Is there anything else that would be a deal-breaker for either protocol?
With UDP you control the "connection state", and it is pretty much the best way to do anything peer-to-peer related IF you have a high number of nodes or care about bandwidth, memory, and CPU overhead. By moving all control over each node's "connection state" into your application, you minimize wasted resources by making everything fit your needs exactly.
You will also bypass a lot of operating-system-specific weirdness that limits the effectiveness of TCP with high numbers of connections. There is TIME_WAIT bloat, and tens to hundreds of OS-specific settings that need tweaking for every user of your P2P app if it needs those high numbers. A test app I made, which let you use UDP with ACKs or TCP, showed only a 10% difference in UDP performance regardless of the operating system, while TCP performance was always lower than the best UDP and varied wildly, by over 600%, depending on the OS. With tweaks you can make most OSes perform roughly the same using TCP, but by default most are not properly tweaked.
So, in my opinion, it is harder to make a reliable UDP P2P network than a TCP one, but it is often needed. However, I would only advise that route if you are quite experienced with networking, as there are a lot of "gotchas" to deal with. There are libraries that help with this, like RakNet or ENet. They provide ways to do reliable UDP, but it still takes a fair amount of networking knowledge to understand how it all ties together, whereas with TCP it is mostly hidden from you.
In a peer-to-peer network you often have messages like NODE PINGs where you may not care whether each one is received; you just care whether you have received one recently. E.g. you might send a ping every 10 seconds and disconnect a node after 60 seconds without one. That would require 6 ping packets in a row to fail, which is highly unlikely unless the node is really down; if you received even one ping in that 60-second window, the node is still active. A TCP implementation of this would involve more latency and bandwidth, as TCP makes sure EACH ping message gets through and will block any other outgoing data until it does. And since you cannot rely on TCP to reliably tell you when a connection is dead, you are forced to add similar PING features to TCP anyway, on top of everything else TCP is already doing with your packets. A minimal sketch of the UDP version follows.
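Here is a minimal sketch of that liveness idea, assuming each node tracks the last time it heard a ping from each peer; the peer address and intervals are illustrative:

    import socket
    import time

    PING_INTERVAL = 10   # seconds between pings, as in the example above
    PEER_TIMEOUT = 60    # drop a peer after this long without any ping

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 7777))
    sock.settimeout(1.0)

    peers = {("203.0.113.20", 7777): time.monotonic()}  # placeholder peer
    last_ping_sent = 0.0

    while True:
        now = time.monotonic()
        if now - last_ping_sent >= PING_INTERVAL:
            for addr in peers:
                sock.sendto(b"PING", addr)  # fire and forget; losses are fine
            last_ping_sent = now
        try:
            data, addr = sock.recvfrom(64)
            if data == b"PING" and addr in peers:
                peers[addr] = now  # any single ping refreshes liveness
        except socket.timeout:
            pass
        # Forget peers we haven't heard from in PEER_TIMEOUT seconds.
        for addr, seen in list(peers.items()):
            if now - seen > PEER_TIMEOUT:
                del peers[addr]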
Games also often have data that is no big deal if a client misses it, because more packets arrive a few milliseconds later that supersede any missed ones. E.g. a player moves from A to Z over a time span of 1 second, and his client sends out a packet roughly every 40 milliseconds: ABCDEFG__I__KLMNOPQRSTUVWXYZ. Do we really care if we miss "H" and "J" when we receive an update every 40ms? Not really; this is where prediction can come into it, but that is usually not relevant to most P2P projects. If that were TCP instead of UDP, it would increase bandwidth requirements and add latency to the rest of the packets being received, because the missed data is resent until it arrives, on top of the extra latency TCP already adds by ACKing everything.
Essentially, you can lower latency and network overhead for many messages in a peer-to-peer network by using UDP. However, there will always be some messages that NEED to be sent reliably, and that requires you to implement some reliable way to get packets to that node, similar to what TCP does. This is where you need some level of expertise if you want a reliable peer-to-peer network. Things to look into include sequencing packets with a number, message ACKs, etc.; a sketch of those two pieces follows.
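As a rough sketch of those two building blocks, here is a hypothetical sender that stamps each message with a sequence number and retransmits until the peer ACKs it; the wire format, peer address, and timing values are made up for illustration:

    import socket
    import struct

    PEER = ("203.0.113.30", 8888)  # placeholder peer address
    RETRY_INTERVAL = 0.2           # resend after 200ms without an ACK
    MAX_RETRIES = 10

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(RETRY_INTERVAL)

    def send_reliable(seq: int, payload: bytes) -> bool:
        """Send one message, retransmitting until the peer echoes the seq back."""
        packet = struct.pack("!I", seq) + payload
        for _ in range(MAX_RETRIES):
            sock.sendto(packet, PEER)
            try:
                data, addr = sock.recvfrom(16)
                # The peer is assumed to reply with b"ACK" + the 4-byte seq.
                if addr == PEER and data == b"ACK" + struct.pack("!I", seq):
                    return True
            except socket.timeout:
                continue  # no ACK yet, resend
        return False  # peer unreachable or consistently dropping packets

    send_reliable(1, b"important state update")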
If you care a lot about efficiency, or really need tens of thousands of connections, then implementing your own specific protocol over UDP will always beat TCP. But there are cases to be made for TCP, such as when the time to build the project matters or when you are new to network programming.
If I remember correctly, using TCP to connect to 1000 servers would mean I had to use 1000 ports to handle these connections.
You remember wrong.
Take a web server listening on port 80: it can handle thousands of connections at the same time on this single port. This is because a connection is defined by the tuple {client-ip, client-port, server-ip, server-port}. And while server-ip and server-port are the same for all connections to this server, the client-ip and client-port are not. Even if the client-ip is the same (i.e. the same client), the client will pick a different source port.
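You can see this tuple directly from client code; a short sketch, assuming some reachable host (example.com as a placeholder): both connections share the server's (ip, 80), but each gets a distinct local port.

    import socket

    # Two connections from the same client to the same server ip/port.
    a = socket.create_connection(("example.com", 80))
    b = socket.create_connection(("example.com", 80))

    # Same remote (server-ip, 80) for both...
    print(a.getpeername(), b.getpeername())
    # ...but the OS picked a different local (client-ip, client-port) for each,
    # which is what keeps the two connections distinct.
    print(a.getsockname(), b.getsockname())

    a.close()
    b.close()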
... with e.g. 1000 nodes without tweaking the system?
This depends on the system, since each open connection needs to preserve state and thus needs memory. This might be a problem for embedded systems with only a little memory.
In any case: if your protocol just sends small messages, and if packet loss, reordering, or duplication are acceptable, then UDP might be the better choice, because its overhead (connection setup, ACKs, ...) is smaller and it takes less memory. You could also use a single socket to exchange data with all 1000 nodes, whereas with TCP you would need a separate socket for each connection (a socket is not the same as a port!). Using only a single socket might allow for a simpler application design (see the sketch below).
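A minimal sketch of that single-socket design, with a made-up peer list; one SOCK_DGRAM socket sends to and receives from every node:

    import socket

    # One UDP socket, bound to one local port, talks to every peer.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 7777))

    # Hypothetical peer list; in a real network this would be all 1000 nodes.
    peers = [("203.0.113.1", 7777), ("203.0.113.2", 7777)]

    for addr in peers:
        sock.sendto(b"tiny message", addr)

    # Receiving is also multiplexed: recvfrom() tells you which peer sent it.
    while True:
        data, addr = sock.recvfrom(2048)
        print(f"{addr} sent {data!r}")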
I want to amend the answer by Steffen with a few points:
1000 connections are nothing for any normal computer and OS.
UDP fits your requirements. It might be easier to program with because it is message-oriented. TCP provides a stream of bytes, so you need to layer a messaging protocol on top of it, which is not that easy. Also, you need to handle broken TCP connections by reconnecting.
Ports are not scarce. No problem with consuming 1000 ports.

TURN Server WebRTC Hardware / Network Requirements

I am currently being challenged (mentally) by the idea of scaling a TURN server (or servers) from a novelty into something that scales with call volume.
Consequently, I am trying to understand the requirements, from a hardware, network, and application perspective, and the associated costs. Some specific questions come to mind that I would love the community's help wrapping my brain around:
1) Are ports reused for multiple destinations simultaneously? It seems to me that conceptually, with UDP, the 4-tuple of source ip/port and destination ip/port is enough for uniqueness, so I can see it being theoretically possible, but I've never seen documentation on this.
2) What is the time (if any) before ports are reused? If the TURN server has allocated ports 1234 and 1235 for a given session, when one or both of those sockets close, how long will it be before the TURN server re-allocates those ports in response to another request?
3) How should I think about the hardware requirements (specifically CPU and memory) of my TURN server(s) as a function of the number of concurrent calls?

Does listen() backlog affect established TCP connections?

Would it be naive to create a TCP socket with a listen backlog set to minimum as a way of rate limiting new incoming connections? The server workload in question doesn't expect many new connections at any time, but spends a lot of time servicing long-lived persistent connections. It appears that new incoming connections shouldn't affect established connections, though I've been unable to find any definitive answer in any text. Is it possible for failed new incoming connections to create some kind of TCP traffic congestion on the server with the packets it's receiving or are they dropped fast enough that it has no effect on any buffers or other part of the network stack?
Specifically, the platform in use is Linux, and although this may be handled differently in different OSes, I expect them all to behave roughly the same.
EDIT: What I mean by "the same" is that the backlog doesn't affect established connections, though I do understand that Linux discards excess connection attempts while Windows sends a reset.
Does listen() backlog affect established TCP connections?
It affects established connections that the server hasn't accepted yet via accept(), but only in the sense that it limits the number of such connections that can exist.
Would it be naive to create a TCP socket with a listen backlog set to minimum as a way of rate limiting new incoming connections?
All it would accomplish is unnecessarily failing some connecting clients. They won't get any service until your server gets around to them anyway, and once the backlog queue fills they are rate-limited by your service code anyway. There is no particular reason why shortening the queue would have any beneficial effect. The other problem with the idea is that it isn't readily possible to determine what the minimum actually is, or whether you succeeded in setting it as the backlog queue length. A sketch of the relevant call is below.
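For reference, a minimal sketch of where the backlog parameter enters the picture; note that, as the answer says, the kernel may silently adjust the value (Linux clamps it to net.core.somaxconn), so you can't be sure what you actually got:

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 8080))

    # The backlog argument caps completed-but-not-yet-accepted connections.
    # The kernel may adjust this value (e.g. Linux clamps it to
    # net.core.somaxconn), and there is no portable way to read back the
    # effective queue length.
    srv.listen(1)

    while True:
        conn, addr = srv.accept()  # established connections queue here first
        conn.sendall(b"hello\n")
        conn.close()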
It appears that new incoming connections shouldn't affect established connections, though I've been unable to find any definitive answer in any text.
That is correct. There is no reason why it should affect them: that's why you won't find it written down anywhere, any more than the fact that the phase of the moon doesn't affect it either.
Is it possible for failed new incoming connections to create some kind of TCP traffic congestion on the server with the packets it's receiving
No.
or are they dropped fast enough that it has no effect on any buffers or other part of the network stack?
They're not dropped. They simply aren't even created if they won't fit in the backlog queue. Ergo, their resource consumption at the server is zero.
Specifically the platform in use is Linux, and although it may be handled differently in different OSs, I expect them to all behave roughly the same.
They don't. On Windows, an incoming connection when the backlog queue is full causes an RST to be issued. On other platforms it is simply ignored.
What you describe are several types of attacks, like flooding, SYN attacks, and other goodies, resulting in denial of service.
This topic is not easy, because protection has to be implemented at all layers, including TCP: consider a SYN attack, fiddling with sequence numbers, and so on. By the time a packet in question reaches TCP it has already come a long way, through the Ethernet layer and the IP layer; bottom line, it is consuming resources. So if your system is under attack, the attacking packets are in your data stream just like the good ones. The faster you can detect that a packet is faulty and drop it, the better. Usually a system that is under attack will be slower, at least in the systems that I have worked with.
Some attacks try to bring your system into a faulty state permanently by exploiting bugs. For instance, TCP has a receive queue: if packets constantly arrive out of order, they are stored in that receive queue, and if a missing packet never arrives, the queue can keep growing and growing. Without the proper defenses, this leads to the system running completely out of resources.
There are specialized tools (Codenomicon, for instance) to check a TCP stack implementation for vulnerabilities. You can assume that the one in Linux has been properly tested using similar tools.
An attack can also occur at the application layer. If you have a TCP server that allows only a limited number of sessions, a malicious user can take all the connections simply by establishing them and then not doing anything with them. So you have to create some defense there as well. Whether you set the limit very low or very high does not change a thing: a malicious user will try anything to bring your system down, so you need to build in defenses anyway. You can connect to a web server (HTTP) using nothing but telnet; if you don't send anything, the server's defenses will come into play and close the connection.
So bringing the number of possible connections down to a low value and considering that, by itself, a form of protection is indeed naive.
Is it possible for failed new incoming connections to create some kind of TCP traffic congestion on the server with the packets it's receiving or are they dropped fast enough that it has no effect on any buffers or other part of the network stack?
They use resources on your machine and will make your system run slower.
It appears that new incoming connections shouldn't affect established connections, though I've been unable to find any definitive answer in any text.
If a normal user is trying to establish a connection, even continuously, retrying upon failure, the influence will be minimal, close to nothing. But a malicious user flooding connection attempts will affect system performance, because the system has to spend time identifying the flawed packets and dropping them as quickly as possible.

How to use UDP from a machine with only NAT access

I have a machine with no external IP address, only NAT access, and it will need to send UDP packets to the outside world.
Will this work?
It is really hard to prototype this in our environment, which is still very much under construction.
Any thoughts on how I can prototype this?
Most home network configurations in the world consist of a PC with an internal IP and a router with a public IP that NATs the internal one (independently of whether UDP, TCP, or any other protocol needs to go out).
I see no trouble with it.
It should work.
Ensure that, for the socket you create, the TTL (time-to-live) is set to a value large enough to cover the possible number of router hops to the destination. Running traceroute to the destination IP will give you a rough idea of the number of hops; note that this can change with network conditions, so it's best to set the TTL to a larger value. Refer to the socket ioctl/setsockopt API documentation for the syntax; a sketch follows.
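A minimal sketch of setting the TTL on a UDP socket, here via setsockopt (the value 64 and the destination address are illustrative; most OS defaults are already 64 or higher):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # Set the IP TTL: each router hop decrements it by one, and the packet is
    # discarded when it reaches zero. 64 comfortably covers typical paths.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 64)

    # Read it back to confirm.
    print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TTL))

    sock.sendto(b"probe", ("198.51.100.7", 5000))  # placeholder destination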
Finally, remember that UDP is not a reliable protocol, so even after taking the steps above, a packet may not reach its destination. However, if the entire network, including the intermediary routers, is within a controlled environment such as a corporate intranet, the chances of packet drops are minimal.
If you want to add reliability on top of UDP, you can adopt a NAK-based algorithm where packets are stamped with a sequence number (a sketch follows below). Various resources will advise you that if you need reliability over UDP you should just use TCP, but my experience has been that if your app runs in a controlled environment with very little chance of packet drops, and you need fast connection setup and teardown, adding lightweight reliability over UDP has its merits. Also, TCP connections take up valuable space in the OS kernel, whereas UDP "connections" don't. This can also be a consideration if you want to support a very large number of 'connections' in a constrained environment.
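A rough sketch of the NAK-based idea from the receiver's side: every datagram carries a sequence number, and the receiver only sends a NAK back when it notices a gap, asking the sender to retransmit. The wire format and addresses are hypothetical:

    import socket
    import struct

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 6000))

    expected = 0  # next sequence number we expect

    while True:
        datagram, sender = sock.recvfrom(2048)
        seq = struct.unpack_from("!I", datagram)[0]
        payload = datagram[4:]
        if seq == expected:
            expected += 1
            # deliver payload to the application here
        elif seq > expected:
            # Gap detected: ask the sender to retransmit the missing range.
            # Unlike ACK-based schemes, nothing is sent while all is well.
            sock.sendto(b"NAK" + struct.pack("!II", expected, seq), sender)
        # seq < expected: duplicate or late packet, ignore it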
At the end of the day you need to experiment a little to figure out what works best for you.
To prototype, I would set up a NAT server using something like Linux and then start working from there. The real-world traffic scenarios you want to simulate will determine where the client and server should be located on either side of the NAT, i.e. whether the traffic should go through an ISP or stay entirely within a controlled environment.
HTH