How to let the kernel choose a port number in the range (1024, 5000) in TCP socket programming

When I run the following code:
struct sockaddr_in sin;
socklen_t addrlen = sizeof(sin);
memset(&sin, 0, sizeof(sin));
sin.sin_family = AF_INET;
sin.sin_addr.s_addr = inet_addr("123.456.789.112");
sin.sin_port = htons(0); // so that the kernel reserves a unique port for us
sd_server = socket(PF_INET, SOCK_STREAM, 0);
bind(sd_server, (struct sockaddr *) &sin, sizeof(sin));
getsockname(sd_server, (struct sockaddr *) &sin, &addrlen);
port = ntohs(sin.sin_port);
printf("port number = %d\n", port);
From what I have read about sockets, I should get a port number between 1024 and 5000, but I'm getting port numbers around 30,000.
What should I do?

Port numbers have a range of 0..65535 (although 0 often has a special meaning). In the original BSD TCP implementation, only root can bind to ports 1..1023, and dynamically assigned ports were assigned from the range 1024..5000; the others were available for unprivileged static assignment. These days 1024..5000 is often not enough dynamic ports, and IANA has now officially designated the range 49152..65535 for dynamic port assignment. However, even that is not enough dynamic ports for some busy servers, so the range is usually configurable (by an administrator). On modern Linux and Solaris systems (often used as servers), the default dynamic range now starts at 32768. Mac OS X and Windows Vista default to 49152..65535.
linux$ cat /proc/sys/net/ipv4/ip_local_port_range
32768 61000
solaris$ /usr/sbin/ndd /dev/tcp tcp_smallest_anon_port tcp_largest_anon_port
32768
65535
macosx$ sysctl net.inet.ip.portrange.first net.inet.ip.portrange.last
net.inet.ip.portrange.first: 49152
net.inet.ip.portrange.last: 65535
vista> netsh int ipv4 show dynamicport tcp
Protocol tcp Dynamic Port Range
---------------------------------
Start Port : 49152
Number of Ports : 16384
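On Linux the same range can also be read programmatically from procfs. A minimal sketch (the helper name get_local_port_range is mine, and the /proc path is Linux-only; on other systems the function simply reports failure):

```c
#include <stdio.h>

/* Read the ephemeral-port range Linux uses, from procfs.
 * Returns 0 on success, -1 if the interface is unavailable (non-Linux). */
int get_local_port_range(int *low, int *high)
{
    FILE *f = fopen("/proc/sys/net/ipv4/ip_local_port_range", "r");
    if (f == NULL)
        return -1;
    int n = fscanf(f, "%d %d", low, high);
    fclose(f);
    return (n == 2) ? 0 : -1;
}
```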

Look at sysctl for your platform. Here is what I see on my Mac:
nickf#goblin:~$ sysctl -a|grep port
...
net.inet.ip.portrange.hilast: 65535
net.inet.ip.portrange.hifirst: 49152
net.inet.ip.portrange.last: 65535
net.inet.ip.portrange.first: 49152
net.inet.ip.portrange.lowlast: 600
net.inet.ip.portrange.lowfirst: 1023
...
These are the ranges the kernel picks ephemeral ports from.
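If you genuinely need a port inside a specific range such as 1024..5000, rather than whatever the system's dynamic range yields, the usual approach is to iterate over the range yourself and bind() explicitly until a free port is found. A minimal sketch (the function name and the use of INADDR_ANY are my choices, not from the original question):

```c
#include <string.h>
#include <errno.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Try to bind sd to the first free TCP port in [lo, hi] on any local
 * address. Returns the port bound, or -1 if the whole range is in use. */
int bind_in_range(int sd, int lo, int hi)
{
    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);

    for (int port = lo; port <= hi; port++) {
        sin.sin_port = htons(port);
        if (bind(sd, (struct sockaddr *)&sin, sizeof(sin)) == 0)
            return port;           /* got one */
        if (errno != EADDRINUSE)
            return -1;             /* unexpected error: give up */
    }
    return -1;                     /* every port in the range is taken */
}
```

Note that ports 1024 and above need no special privileges, so this loop works for an unprivileged process.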

Related

[rping]rdma_resolve_addr: Cannot assigin requested address

After modifying net.ipv4.ip_local_port_range to increase the number of available ports, rping works, and the RDMA connections in our project are normal as well.
But initially this parameter was net.ipv4.ip_local_port_range = 10001 65535. We were able to change it to 10000 65535, but not to 9900 65535.
What is the reason?
Checking with netstat -anp shows that not many ports are occupied on the Linux machine.
ss | wc -l showed only about 200 connections at the time, far fewer than the configured range net.ipv4.ip_local_port_range = 10001 65535 allows.
I want to know how the source port is allocated when an RDMA connection is made. Is the available port selected from the range net.ipv4.ip_local_port_range = 10001 65535? If so, why does rdma_resolve_addr still fail with "Cannot assign requested address" when so many ports in that range are still available?
If the port is not selected from net.ipv4.ip_local_port_range, why does rping start working when that range is enlarged?
Or is the source port selection for RDMA connections simply unrelated to net.ipv4.ip_local_port_range?
After rdma_resolve_addr succeeds, the port returned by rdma_get_src_port is sometimes not within net.ipv4.ip_local_port_range; judging from this result, the local port of an RDMA connection is not limited by this parameter.

Binding to UDP socket *from* a specific IP address

I have packets coming from a specific device directly connected to my machine. When I run tcpdump -i eno3 -nn, I can see the packets:
23:58:22.831239 IP 192.168.0.3.6516 > 255.255.255.255.6516: UDP, length 130
eno3 is configured as 192.168.0.10/24
When I set the socket the typical way:
gOptions.sockfd = socket(AF_INET, SOCK_DGRAM, 0);
memset((void *)&gOptions.servaddr, 0, sizeof(struct sockaddr_in));
gOptions.servaddr.sin_family = AF_INET;
inet_pton(AF_INET, gOptions.sourceIP, &(gOptions.servaddr.sin_addr));
gOptions.servaddr.sin_port = htons(gOptions.udpPort);
bind(gOptions.sockfd, (struct sockaddr *)&gOptions.servaddr, sizeof(struct sockaddr_in));
And I use a sourceIP of "255.255.255.255" on port "6516", it binds and reads.
What I want to do, however, is bind such that I limit my connection to the source IP "192.168.0.3". I have figured out how to bind to the device using either the device name ("eno3") or the IP address of that device ("192.168.0.10"), but that doesn't help, as I may have multiple devices connected to "192.168.0.10" that blab on that port, and I only want the packets from 192.168.0.3 on port 6516.
I thought s_addr, part of sin_addr, was the source IP... but it is not.
You can't bind() to a remote IP/port, only to a local IP/port. So, for what you have described, you need to bind() to the IP/port where the packets are being sent to (192.168.0.10:6516).
Now, you have two options to choose from. You can either:
use recvfrom() to receive packets, using its src_addr parameter to get each sender's IP/port, and then discard any packets that were not sent from the desired sender (192.168.0.3:6516).
or, use connect() to fix the desired sender's IP/port (192.168.0.3:6516), and then use recv() (not recvfrom()) to receive packets from only that sender.
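The second option might look like the sketch below. The helper name udp_socket_from_peer is mine, and error handling is minimal; after connect() the kernel discards datagrams arriving from any other source:

```c
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Create a UDP socket bound to local_ip:local_port that only receives
 * datagrams sent from peer_ip:peer_port. Returns the fd, or -1 on error. */
int udp_socket_from_peer(const char *local_ip, int local_port,
                         const char *peer_ip, int peer_port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;

    /* 1. bind() to the LOCAL address the packets are delivered to */
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(local_port);
    inet_pton(AF_INET, local_ip, &addr.sin_addr);
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }

    /* 2. connect() to the REMOTE sender we want packets from; datagrams
     *    from any other source are now dropped by the kernel. */
    addr.sin_port = htons(peer_port);
    inet_pton(AF_INET, peer_ip, &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;   /* now use recv(), not recvfrom() */
}
```

For the setup in the question that would be udp_socket_from_peer("192.168.0.10", 6516, "192.168.0.3", 6516).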

Increasing max outbound connections on CentOS

I learned from this article, Scaling to 12 Million Concurrent Connections: How MigratoryData Did It, that it's possible to make more than 64K connections from a single client by using multiple IPs.
Now I have an AWS ec2 machine that has 10 IPs for testing. The config in /etc/sysctl.conf is
fs.nr_open = 2000000
fs.file-max = 2000000
And the config in /etc/security/limits.d/def.conf is
* soft nofile 2000000
* hard nofile 2000000
I start one process (written in C) and create 60000 connections from the first IP address. Everything works fine.
Then I start another process and try to create 60000 connections from the second IP address, but it gets errors when the number of connections reaches about 7500 (total number: 67500). The error message is Connection timed out.
The problem doesn't seem to be a file descriptor limitation, because I can still open/read/write files on the client machine. But any outgoing connection to any remote server times out.
The problem is not on the server side, because the server can accept many more connections from different client machines.
It looks like there's some setting other than the number of open files that limits the number of outgoing connections. Can anyone help?
In order to be able to open more than 65536 TCP socket connections from your client machine, you do indeed have to use more IP addresses.
Then, for each TCP socket connection, you should tell the kernel which IP address and which ephemeral port to use.
So, after the TCP client creates a socket and before it connects to the remote address, the TCP client should explicitly bind one of the local IP addresses available on your client machine to the socket.
The MigratoryData Benchmark Tools are written in Java so I cannot provide you the exact code that we use to open any number of TCP connections on the client side. But, here is a quick example written in C++.
Suppose your TCP server listens on 192.168.1.1:8800 and suppose 192.168.1.10 is one of the IP addresses of your client machine, then you can create a socket connection from the local IP address 192.168.1.10 and an ephemeral local port -- let's say 12345 -- to the remote IP address 192.168.1.1 and the remote port 8800 using something like:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(int argc, char *argv[])
{
    int n, sockfd;
    char buffer[1024];
    struct sockaddr_in localaddr, remoteaddr;

    sockfd = socket(AF_INET, SOCK_STREAM, 0);

    /* bind to one of the local IP addresses and an ephemeral port */
    memset(&localaddr, 0, sizeof(localaddr));
    localaddr.sin_family = AF_INET;
    localaddr.sin_addr.s_addr = inet_addr("192.168.1.10");
    localaddr.sin_port = htons(12345);
    bind(sockfd, (struct sockaddr *) &localaddr, sizeof(localaddr));

    /* then connect to the remote address */
    memset(&remoteaddr, 0, sizeof(remoteaddr));
    remoteaddr.sin_family = AF_INET;
    remoteaddr.sin_addr.s_addr = inet_addr("192.168.1.1");
    remoteaddr.sin_port = htons(8800);
    connect(sockfd, (struct sockaddr *) &remoteaddr, sizeof(remoteaddr));

    n = read(sockfd, buffer, 512);
    // ...
    close(sockfd);
    return 0;
}
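One further point, not from the original answer: binding a fixed source port yourself means your client has to track which (local IP, port) pairs are already in use. On Linux 4.2+ the IP_BIND_ADDRESS_NO_PORT socket option lets you bind only the address with sin_port = 0 and defer port selection to connect(), which allows the kernel to reuse the same local port for different remote addresses. A sketch under those assumptions (the helper name is mine):

```c
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Bind only the local IP; let the kernel pick the source port at connect()
 * time where supported (Linux 4.2+). Returns the socket fd, or -1 on error. */
int socket_bound_to_ip(const char *local_ip)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

#ifdef IP_BIND_ADDRESS_NO_PORT
    int one = 1;
    setsockopt(fd, IPPROTO_IP, IP_BIND_ADDRESS_NO_PORT, &one, sizeof(one));
#endif

    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_port = htons(0);                 /* no fixed port */
    inet_pton(AF_INET, local_ip, &local.sin_addr);
    if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
        close(fd);
        return -1;
    }
    return fd;                                 /* connect() picks the port */
}
```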

Socket Programming: bind() system call

While studying computer networks as a course subject, my understanding was that the operating system distinguishes a packet based on the destination port and delivers it to the application running on that port.
Later I came to know that we can connect to two different destinations (DestinationIP:DestinationPort) using the same source(SourceIP:SourcePort).
tcp 0 0 192.168.1.5:60000 199.7.57.72:80 ESTABLISHED 1000 196102 10179/firefox
tcp 0 0 192.168.1.5:60000 69.192.3.67:443 ESTABLISHED 1000 200361 10179/firefox
tcp 0 0 192.168.1.5:60000 69.171.234.18:80 ESTABLISHED 1000 196107 10179/firefox
tcp 0 0 192.168.1.5:60000 107.21.19.182:22 ESTABLISHED 1000 196399 10722/ssh
tcp 0 0 192.168.1.5:60000 69.171.234.18:443 ESTABLISHED 1000 201792 10179/firefox
tcp 0 0 192.168.1.5:60000 69.192.3.34:443 ESTABLISHED 1000 200349 10179/firefox
tcp 0 0 127.0.0.1:4369 127.0.0.1:51889 ESTABLISHED 129 12036 1649/epmd
tcp 0 0 192.168.1.5:60000 69.192.3.58:443 ESTABLISHED 1000 200352 10179/firefox
tcp 0 0 192.168.1.5:60000 74.125.236.88:80 ESTABLISHED 1000 200143 10179/firefox
tcp 0 0 192.168.1.5:60000 174.122.92.78:80 ESTABLISHED 1000 202935 10500/telnet
tcp 0 0 192.168.1.5:60000 74.125.236.87:80 ESTABLISHED 1000 201600 10179/firefox
Going a little deeper, I came to know that if an application uses the bind() system call to bind a socket descriptor to a particular IP and port combination, then we can't use the same port again. Otherwise, if a port is not bound to any socket descriptor, we can use the same port and IP combination again to connect to a different destination.
I read in the man page of bind() syscall that
bind() assigns the address specified by addr to the socket referred to by the file descriptor sockfd.
My questions are:
When we don't call the bind() syscall, as is typical when writing a client program, how does the OS automatically select the port number?
When two different applications use the same port and IP combination to connect to two different servers, and those servers reply back, how does the OS find out which packet needs to be delivered to which application?
When we don't call the bind() syscall, as is typical when writing a client program, how does the OS automatically select the port number?
The OS picks a random unused port (not necessarily the "next" one).
how does the OS find out which packet needs to be delivered to which application?
Each TCP connection is identified by a 4-tuple: (src_addr, src_port, dst_addr, dst_port) which is unique and thus enough to identify where each segment belongs.
EDIT
When we don't call the bind() syscall, as is typical when writing a client program, how does the OS automatically select the port number?
Sometime before "connecting" in the case of a TCP socket. For example, Linux has the function inet_csk_get_port to get an unused port number. Look for inet_hash_connect in tcp_v4_connect.
For 1: The OS just picks an available port.
For 2: It is done based on the destination port of the incoming packet, i.e. the client's local port. Client applications connecting to the same server will normally do so over different client ports.
I think that for a client program the OS maintains a table mapping the socket fd (opened by the client) to the server IP+port once the TCP connection is established. Whenever the server replies, the OS can look up the socket fd for that particular server IP+port and write the data onto it, so the server's reply becomes available to the client on that particular socket fd.
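The behaviour asked about in question 1 can be observed directly: connect without calling bind(), then call getsockname() to see which ephemeral source port the kernel picked. A minimal sketch (the helper name is mine, and loopback is used purely for illustration):

```c
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Connect to 127.0.0.1:listen_port WITHOUT calling bind(), then use
 * getsockname() to see which ephemeral source port the kernel picked.
 * Returns the port in host byte order, or -1 on failure. */
int connect_and_get_port(int listen_port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(listen_port);
    inet_pton(AF_INET, "127.0.0.1", &sin.sin_addr);

    if (connect(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
        close(fd);
        return -1;
    }

    socklen_t len = sizeof(sin);
    getsockname(fd, (struct sockaddr *)&sin, &len); /* kernel filled in the source port */
    int port = ntohs(sin.sin_port);
    close(fd);
    return port;
}
```

On Linux the returned port will normally fall inside the range shown by /proc/sys/net/ipv4/ip_local_port_range.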

Socket remote connection problem C

I wrote a simple server application in C. This server does nothing except print the received message, then exit. Here is the code:
int listenfd, connfd, n;
struct sockaddr_in servaddr, cliaddr;
socklen_t clilen;
char *mesg = malloc(1000);

listenfd = socket(PF_INET, SOCK_STREAM, 0);
bzero(&servaddr, sizeof(servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_addr.s_addr = INADDR_ANY;
servaddr.sin_port = htons(20600);
bind(listenfd, (struct sockaddr *)&servaddr, sizeof(servaddr));
listen(listenfd, 5);

clilen = sizeof(cliaddr);
connfd = accept(listenfd, (struct sockaddr *)&cliaddr, &clilen);
n = (int) recvfrom(connfd, mesg, 999, 0, (struct sockaddr *)&cliaddr, &clilen); /* leave room for the '\0' below */
sendto(connfd, mesg, n, 0, (struct sockaddr *)&cliaddr, sizeof(cliaddr));
printf("-------------------------------------------------------\n");
mesg[n] = 0;
printf("Received the following:\n");
printf("%s\n", mesg);
printf("-------------------------------------------------------\n");
close(connfd);
close(listenfd);
I managed to establish a connection using telnet and running
telnet 192.168.1.2 20600
where 192.168.1.2 is the local ip of the server.
The machine runs behind a router ZyXel p-660HW-61 (192.168.0.1).
The problem is I cannot reach the server if I specify the public ip of the machine (151.53.150.45).
I set the NAT configuration to forward all ports from 20000 to 21000 to the server's local IP.
http://img593.imageshack.us/img593/3959/schermata20110405a22492.png
Port 20600 seems to be open, according to canyouseeme.org and yougetsignal.com/tools/open-ports/ (in fact I can see in the console that a packet was received), but if I run
telnet 151.53.150.45 20600
I get a "Connection Refused" error.
Firewall is disabled, both on the router and on the server machine (that is the same running telnet).
Any help?
If you are typing:
telnet 151.53.150.45 20600
from the LAN rather than from the WAN, then your NAT most probably does not handle hairpin situations properly. This means it only expects you to use the translated address from the WAN.
The solution is to check whether you can change the configuration of your NAT to enable the use of the translated address on the LAN too (this is sometimes a requirement for P2P systems). If such functionality is not available, then you need a new NAT.