Calculate Bandwidth for Each Application (TCP/UDP) on Windows? - sockets

Is it possible to calculate the bandwidth of networking applications (TCP/UDP) by using the Win32 API on Windows?
AFAIK, TCP bandwidth calculation can be done with the GetPerTcpConnectionEStats function, but I could not find an equivalent function for UDP.
PsPing seems to work for UDP, so I assume there must be a way to measure the bandwidth of UDP connections somewhere. Am I right?
Is there any other way, in code, to gather instantaneous UDP bandwidth usage per connection on Windows?
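For the TCP half, here is a hedged sketch of the GetPerTcpConnectionEStats approach mentioned above: enable per-connection data collection (administrator rights required), sample DataBytesIn twice, and divide by the interval. Picking the first row of GetTcpTable() is an illustrative shortcut, not a recommendation.

    // C++ / Win32 sketch: per-connection TCP bandwidth via EStats.
    #include <winsock2.h>
    #include <iphlpapi.h>
    #include <tcpestats.h>
    #include <vector>
    #include <cstdio>
    #pragma comment(lib, "iphlpapi.lib")

    static bool enableDataStats(MIB_TCPROW* row) {
        TCP_ESTATS_DATA_RW_v0 rw = {};
        rw.EnableCollection = TRUE;              // needs admin rights
        return SetPerTcpConnectionEStats(row, TcpConnectionEstatsData,
                                         (PUCHAR)&rw, 0, sizeof(rw), 0) == NO_ERROR;
    }

    static ULONG64 bytesIn(MIB_TCPROW* row) {
        TCP_ESTATS_DATA_ROD_v0 rod = {};
        ULONG err = GetPerTcpConnectionEStats(row, TcpConnectionEstatsData,
                                              NULL, 0, 0,   // no RW block
                                              NULL, 0, 0,   // no ROS block
                                              (PUCHAR)&rod, 0, sizeof(rod));
        return (err == NO_ERROR) ? rod.DataBytesIn : 0;
    }

    int main() {
        ULONG size = 0;
        GetTcpTable(NULL, &size, TRUE);          // query required buffer size
        std::vector<char> buf(size);
        PMIB_TCPTABLE table = (PMIB_TCPTABLE)buf.data();
        if (GetTcpTable(table, &size, TRUE) != NO_ERROR || table->dwNumEntries == 0)
            return 1;
        MIB_TCPROW* row = &table->table[0];      // sketch: first connection only
        if (!enableDataStats(row)) return 1;
        ULONG64 a = bytesIn(row);
        Sleep(1000);
        ULONG64 b = bytesIn(row);
        printf("inbound: %llu bytes/sec\n", (unsigned long long)(b - a));
        return 0;
    }

As the question notes, there is no direct EStats counterpart for UDP; GetUdpStatisticsEx(), for example, returns only system-wide datagram counters rather than per-connection bytes.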

Related

network realtime audio transmission with latency < 100 ms

I need a solution to transmit one audio channel (mono) at 44.1 kHz / 16-bit, i.e. 88.2 KB/s, with less than 100 ms latency to a remote location. The application is for a remote concert. My software is built on Windows 10 with Max (Cycling '74), Java, Unity and C#. I also want to send data between the applications, especially from Max to Java and Unity. I found ZeroMQ and Apache Kafka as possible frameworks. I would appreciate some hints on which tools could be suitable. As I am not very experienced in network programming, minimizing the implementation effort is also an important concern.
ZeroMQ is capable of sub-millisecond latency on an internal network. However, I would recommend raw UDP sockets: UDP doesn't retransmit lost packets and has very low overhead compared to TCP (which ZeroMQ uses).
You may also need to do traffic prioritization on your network to ensure a reasonable latency, but with the small amount of data you are sending it might not have a significant effect (this all depends on your specific network). I would start by implementing a UDP socket; then, if you see unacceptable latencies, try to optimize the network.
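A minimal sketch of that raw-UDP route (Winsock; the peer address, port and the ~10 ms frame size are illustrative assumptions, and the real loop would be driven by the audio callback):

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #pragma comment(lib, "ws2_32.lib")

    int main() {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        sockaddr_in peer = {};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(5000);                      // illustrative port
        inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr); // illustrative address

        // ~10 ms of 44.1 kHz / 16-bit mono audio = 441 samples = 882 bytes,
        // so the 88.2 KB/s stream fits comfortably in small datagrams.
        short frame[441] = {};   // in a real app, filled by the audio callback
        sendto(s, (const char*)frame, sizeof(frame), 0,
               (sockaddr*)&peer, sizeof(peer));           // fire and forget

        closesocket(s);
        WSACleanup();
        return 0;
    }

Lost frames are simply never retransmitted, which is what you want at this latency budget; conceal them on the receiver (repeat or fade the last frame) rather than wait for them.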

TCP or UDP for lots of connections?

I want to create a P2P network with the following characteristics:
low latency is not really important
losing packets is okay
the nodes would only send tiny amounts of data around
there will be no NAT/firewall issues; every node has an open port on its public IP
every node is connected to every other node
Usually I would use TCP for anything not time-critical, but the last requirement causes the nodes to have lots of open connections for a long time. If I remember correctly, using TCP to connect to 1000 servers would mean I had to use 1000 ports to handle these connections. UDP, on the other hand, would only require a single port for each node.
So my question is: Is TCP able to handle the above requirements in a network with e.g. 1000 nodes without tweaking the system? Would UDP be better suited in this case? Is there anything else that would be a deal-breaker for either protocol?
With UDP you control the "connection state" yourself, and it is pretty much the best way to do anything peer-to-peer related IF you have a high number of nodes or care about bandwidth, memory and CPU overhead. By moving all control over each node's "connection state" into your application, you minimize wasted resources by making it fit your needs exactly.
You will bypass a lot of operating-system-specific weirdness that limits the effectiveness of TCP with high numbers of connections: TIME_WAIT bloat and tens to hundreds of OS-specific settings which will need tweaking for every user of your P2P app if it needs those high numbers. A test app I made, which could run either over UDP with ACKs or over TCP, showed only about a 10% performance spread across operating systems when using UDP. TCP performance was always lower than the best UDP result, and it varied wildly, by over 600%, depending on the OS. With tweaks you can make most OSes perform roughly the same over TCP, but by default most are not properly tweaked.
So in my opinion it is harder to make a reliable UDP P2P network than a TCP one, but it is often needed. However, I would only advise that route if you are quite experienced with networking, as there are a lot of "gotchas" to deal with. There are libraries which help with this, like RakNet or ENet. They provide ways to do reliable UDP, but it still takes a fair amount of networking knowledge to understand how it all ties together, whereas with TCP it is mostly hidden from you.
In a peer-to-peer network you often have messages like node PINGs where you don't care whether each individual one arrives, only whether you have received one recently. E.g. you may send a ping every 10 seconds and disconnect a node after 60 seconds of no ping. That means 6 ping packets in a row would have to fail, which is highly unlikely unless the node is really down; if you received even one ping in that 60-second window, the node is still active. A TCP implementation of this would involve more latency and bandwidth, as TCP makes sure EACH ping message gets through and will block any other outgoing data until it does. And since you cannot rely on TCP to reliably tell you when a connection is dead, you are forced to add similar PING features for TCP anyway, on top of everything else TCP is already doing to your packets.
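A sketch of that liveness bookkeeping (the names and timeouts mirror the example above; this is illustrative, not a library API):

    #include <chrono>
    #include <string>
    #include <unordered_map>

    using Clock = std::chrono::steady_clock;

    struct PeerTable {
        std::unordered_map<std::string, Clock::time_point> lastPing; // "ip:port" -> time

        void onPingReceived(const std::string& peer) {
            lastPing[peer] = Clock::now();       // any one ping refreshes liveness
        }

        // Call periodically: drop peers silent longer than the timeout
        // (60 s = ~6 consecutive 10 s pings all lost).
        void expire(std::chrono::seconds timeout = std::chrono::seconds(60)) {
            const auto now = Clock::now();
            for (auto it = lastPing.begin(); it != lastPing.end(); ) {
                if (now - it->second > timeout)
                    it = lastPing.erase(it);
                else
                    ++it;
            }
        }
    };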
Games also often have data that is no big deal if a client misses it, because more packets arrive a few milliseconds later which invalidate any missed ones. E.g. a player moves from A to Z over a span of 1 second and his client sends a packet roughly every 40 ms: ABCDEFG__I__KLMNOPQRSTUVWXYZ. Do we really care if we miss "H" and "J" when we are receiving updates every 40 ms? Not really; this is where prediction can come into it, but that is usually not relevant to most P2P projects. With TCP instead of UDP, this would increase bandwidth requirements and add latency to the rest of the packets being received, since the missed data would be resent until it arrives, on top of the extra latency TCP already adds by ACKing everything.
Essentially, you can lower latency and network overhead for many messages in a peer-to-peer network by using UDP. However, there will always be some messages which NEED to be delivered reliably, and that requires you to implement some reliable way to get packets to that node, similar to what TCP does. This is where you need some level of expertise if you want a reliable peer-to-peer network. Things to look into include sequencing packets with a number, message ACKs, etc.
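A sketch of what such a packet header might look like (field names are illustrative; libraries like RakNet and ENet use richer variants of the same idea):

    #include <cstdint>

    #pragma pack(push, 1)
    struct PacketHeader {
        uint16_t sequence; // sender's number for this packet
        uint16_t ack;      // newest sequence received from the peer
        uint32_t ackBits;  // bitfield ACKing the 32 packets before 'ack'
        uint8_t  flags;    // e.g. RELIABLE: sender retransmits until ACKed
    };
    #pragma pack(pop)

The sender keeps reliable packets buffered and retransmits them until the matching bit shows up in an incoming ack/ackBits pair; unreliable packets are sent once and forgotten.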
If you care a lot about efficiency, or really need tens of thousands of connections, then implementing your specific protocol over UDP will always be better than TCP. But there are cases to be made for TCP, such as when time-to-build matters or when you are new to network programming.
If I remember correctly, using TCP to connect to 1000 servers would mean I had to use 1000 ports to handle these connections.
You remember wrong.
Take a web server: it listens on port 80 and can handle thousands of connections at the same time on this single port. This is because a connection is defined by the tuple {client-ip, client-port, server-ip, server-port}. While server-ip and server-port are the same for all connections to this server, the client-ip and client-port are not. Even if the client-ip is the same (i.e. the same client), the client would pick a different source port.
... with e.g. 1000 nodes without tweaking the system?
This depends on the system, since each open connection needs to preserve state and thus needs memory. This might be a problem for embedded systems with very little memory.
In any case: if your protocol just sends small messages, and if packet loss, reordering or duplication are acceptable, then UDP might be the better choice, because the overhead (connection setup, ACKs, ...) is smaller and it takes less memory. You could also use a single socket to exchange data with all 1000 nodes, whereas with TCP you would need a separate socket for each connection (a socket is not the same as a port!). Using only a single socket might also allow for a simpler application design, as the sketch below shows.
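A sketch of that single-socket pattern (Winsock flavour; the port is an illustrative assumption):

    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    int main() {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        sockaddr_in local = {};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(40000);          // illustrative port
        bind(s, (sockaddr*)&local, sizeof(local));

        char buf[1500];
        for (;;) {
            sockaddr_in from = {};
            int fromLen = sizeof(from);
            int n = recvfrom(s, buf, sizeof(buf), 0, (sockaddr*)&from, &fromLen);
            if (n <= 0) break;
            // 'from' tells you which of the 1000 nodes sent this datagram;
            // reply (or route) on the very same socket.
            sendto(s, buf, n, 0, (sockaddr*)&from, fromLen);
        }
        closesocket(s);
        WSACleanup();
        return 0;
    }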
I want to amend the answer by Steffen with a few points:
1000 connections are nothing for any normal computer and OS.
UDP fits your requirements. It might be easier to program with because it is message-oriented. TCP provides a stream of bytes; you need to layer a messaging protocol on top of that, which is not that easy (see the framing sketch after this list). Also, you need to handle broken TCP connections by reconnecting.
Ports are not scarce. No problem with consuming 1000 ports.
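To illustrate the framing point from above: with TCP you must reassemble messages out of the byte stream yourself, e.g. with a length prefix (a hedged sketch, not a complete protocol):

    #include <winsock2.h>
    #include <cstdint>
    #include <vector>
    #pragma comment(lib, "ws2_32.lib")

    // recv() may return fewer bytes than requested, so loop until done.
    static bool recvAll(SOCKET s, char* p, int len) {
        while (len > 0) {
            int n = recv(s, p, len, 0);
            if (n <= 0) return false;   // broken connection: caller reconnects
            p += n;
            len -= n;
        }
        return true;
    }

    // Read one message framed as: 4-byte big-endian length, then payload.
    static bool recvMessage(SOCKET s, std::vector<char>& msg) {
        uint32_t netLen = 0;
        if (!recvAll(s, (char*)&netLen, sizeof(netLen))) return false;
        uint32_t len = ntohl(netLen);
        msg.resize(len);
        return len == 0 || recvAll(s, msg.data(), (int)len);
    }

With UDP each recvfrom() already returns exactly one message, which is why the message-oriented style is easier here.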

Selection of software architecture or lib to optimize UDP client on a point-to-point network

My goal is to drop as few UDP datagrams as possible. Shocker, I know. ;-)
Here is my circumstance which is a bit different from the general network server/client optimization questions for which I see a lot of discussion:
I am writing socket code for a process which has one singular goal: grab UDP packets received by my Gigabit Ethernet NIC and get them into application RAM with as high a bandwidth as possible (i.e. minimize packet drops/loss).
The network is point-to-point, without any firewalls, switches or routers - just a single Cat6 cable connecting the UDP datagram generator/server (an embedded system) with my Windows 7 PC, the datagram receiver/client. I can control the transmitted datagrams-per-second via some controls on the datagram generator. The datagrams are sent to the broadcast address (255.255.255.255).
I've successfully achieved about 250-300 Mbit/s (30% of the theoretical 1G Ethernet bandwidth) without any packets getting dropped or order-scrambled, using lean-and-mean code based on the built-in Winsock2 functions select() and recvfrom(), as outlined in the sample code for those functions on MSDN.
(I've already enlarged the receive buffer using setsockopt(), and this helped considerably; a sketch of my receive loop follows at the end of this question.) But I still want to maximize performance, and I am eager to hear thoughts from this community on whether or not I should expect noticeable gains from trying the following:
Asynchronous I/O, such as boost::asio. From what I gather, this library appears to be more for optimizing applications which have to serve a lot of different sockets to different machines. Should I expect much in terms of single-socket UDP receive performance from switching from Winsock to an asynchronous I/O architecture?
Packet size: If I make the effort to change the packet size by modifying the embedded code that is generating the packets, would it be likely to improve performance by having lots of smaller packets or fewer large/jumbo packets?
Broadcast/multicast/unicast: is one destination address type likely to perform better than others?
Or is 300Mbps about the limit that I should be expecting for actual throughput on a 1G physical link?
Any other recommendations on low-hanging fruit to improve performance, or expectations on what kind of performance is feasible, are welcome.
Thanks all!
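For reference, a hedged sketch of the lean select()/recvfrom() loop described above, with an enlarged SO_RCVBUF (the buffer size and port are illustrative simplifications, not the exact values used):

    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    int main() {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        int rcvbuf = 16 * 1024 * 1024;   // large kernel receive buffer (16 MB)
        setsockopt(s, SOL_SOCKET, SO_RCVBUF, (const char*)&rcvbuf, sizeof(rcvbuf));

        sockaddr_in local = {};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(50000);   // illustrative port
        bind(s, (sockaddr*)&local, sizeof(local));

        char buf[65536];
        for (;;) {
            fd_set readSet;
            FD_ZERO(&readSet);
            FD_SET(s, &readSet);
            if (select(0, &readSet, NULL, NULL, NULL) <= 0) break;
            int n = recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);
            if (n <= 0) break;
            // Keep this path minimal: copy the datagram into application RAM
            // and hand it off, so the socket buffer drains faster than the
            // sender can fill it.
        }
        closesocket(s);
        WSACleanup();
        return 0;
    }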

What is the serial communication speed using TCP sockets?

I am communicating using TCP sockets. One computer is using Windows commands, and the other is running on Linux using Python. The two computers are able to communicate, but I'm not sure what the bit rate is. I never set any bit rate. Is there a default bit rate? Can it be changed?
EDIT: It seems that the programs can accommodate a variety of bit rates. For example, 10 Mbps Ethernet or 100 Mbps Ethernet. I thought (wrongly) that the bit rate had to be set, as it does for serial communication over USB. It does not have to be set.
TCP implements the slow-start and congestion-avoidance procedures, by which it probes the capacity of the underlying network and tries to exploit it as much as possible. The process is fairly complex, but the bottom line is that TCP will try to use all the available bandwidth. The reference standard is IETF RFC 5681: https://www.rfc-editor.org/rfc/rfc5681
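Since there is no bit rate to set, the practical way to see what rate you are getting is to measure it. A hedged sketch of the Windows side (host, port and transfer size are illustrative assumptions, and the peer is assumed to read continuously):

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <chrono>
    #include <cstdio>
    #pragma comment(lib, "ws2_32.lib")

    int main() {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        sockaddr_in peer = {};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(9000);                       // illustrative port
        inet_pton(AF_INET, "192.0.2.20", &peer.sin_addr);  // illustrative host
        if (connect(s, (sockaddr*)&peer, sizeof(peer)) != 0) return 1;

        static char chunk[64 * 1024] = {};
        const long long total = 100LL * 1024 * 1024;       // push 100 MB
        long long sent = 0;
        auto t0 = std::chrono::steady_clock::now();
        while (sent < total) {
            int n = send(s, chunk, sizeof(chunk), 0);
            if (n <= 0) break;
            sent += n;
        }
        double secs = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - t0).count();
        printf("%.1f Mbit/s\n", sent * 8 / secs / 1e6);
        closesocket(s);
        WSACleanup();
        return 0;
    }

Expect the measured rate to ramp up over the first round trips (slow start) and then hover near whatever the path can sustain.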

How efficient are BSD sockets for writing a server-client application on iPhone?

I am creating a server-client application for iPhone. I want two applications on the same network to communicate.
For this functionality I am planning to use sockets. How efficient are BSD sockets on the iPhone?
Is there any other option available to implement the same functionality?
Thanks,
Jim.
See this thread on the iPhone Dev SDK website.
The CF networking stuff is a bit confusing and hard to wrap your head around. But it's just a set of functions that use BSD sockets and integrate them with the run loop so you don't have to create threads. You can still use BSD sockets yourself.
Basically, the thread points out multiple libraries/frameworks which integrate well with the iPhone environment, and using any of them instead of straight BSD sockets probably won't make any significant performance difference. Unless you're really comfortable with low-level socket programming, you're probably better off with one of the libraries.
Don't do premature optimization - use whatever socket interface you are most comfortable with and which will help you get the job done quickly and produce clear, maintainable code.
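For completeness, here is roughly what the straight BSD-socket route looks like: a minimal TCP client using plain POSIX calls, which work on iPhone OS as on any Unix (the address and port are illustrative). The blocking recv() is exactly why the frameworks add run-loop integration or threads on top:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>
    #include <cstring>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in peer = {};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(7000);                        // illustrative port
        inet_pton(AF_INET, "192.168.1.10", &peer.sin_addr); // illustrative peer
        if (connect(fd, (sockaddr*)&peer, sizeof(peer)) != 0) return 1;

        const char msg[] = "hello";
        send(fd, msg, strlen(msg), 0);

        char buf[256];
        recv(fd, buf, sizeof(buf), 0);  // blocks the calling thread
        close(fd);
        return 0;
    }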
EDIT
In response to Jim's question below:
Yes. There are a few factors that determine the system-wide and per-process socket limits. Take a look at this article for a discussion of these issues. iPhone OS and Linux are both Unix-based, so they probably share some of these admin-related socket limitations, but you'll have to look up the system-specific details.
Second, there are limits imposed by the architecture of UDP and TCP. Basically, both are limited to 2^16 listening sockets per machine IP address, since a listening socket is defined by a fixed 32-bit IP address and a 16-bit port number. However, since a connected socket is defined by the tuple [src IP, src port, dst IP, dst port], the number of connected sockets you can theoretically have on a single machine IP is far higher: with the local IP fixed, that leaves 2^16 local ports × 2^32 remote IPs × 2^16 remote ports = 2^64 combinations, although practically your OS would barf way before you hit that limit.