Let's say you have a UDP-only game server running on a host that has both IPv4 and IPv6 addresses. The server starts up, calls getaddrinfo() to loop through the available addresses, and, let's say, grabs the IPv6 address. So it creates its socket on IPv6 and waits for packets from clients.
A client tries to connect, but this time using an IPv4 address entered by the user. It creates an IPv4 socket and tries to connect to the server. Does the difference actually matter? Or does the difference between an IPv4 socket and an IPv6 socket stop at the local machine?
Likewise, suppose the client has already created, say, an IPv6 socket for its own use (because getaddrinfo() said it was valid), and then calls getaddrinfo() to find a server's address, but only gets an IPv4 result. I know I can tell getaddrinfo() to give only IPv6 results, but what if the server doesn't have an IPv6 address? Are UDP clients supposed to close and recreate their sockets to match the server's address family? Or am I guaranteed to get the address family I ask for?
(I welcome any documentation references that answer these questions. I've been researching for hours but haven't found clear answers to these points yet.)
By default, the IPv6 UDP socket will send and receive only IPv6 UDP packets, so your IPv4 client would be out of luck.
However, if you are running on a dual-stack machine (and you probably are), you can enable IPv4-mapped IPv6 addresses on the socket, and then you can use that socket to handle both IPv4 and IPv6 UDP traffic. IPv4 packets will show up as coming from a specially-formed IPv6 address (of the form "::ffff:192.168.0.5"), but otherwise they are handled the same way as any IPv6 UDP client would be.
You can enable IPv4-mapped IPv6 addresses on your socket like this (after socket() but before bind()):
int v6OnlyEnabled = 0; // we want v6-only mode disabled, which is to say we want v6-to-v4 compatibility
if (setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &v6OnlyEnabled, sizeof(v6OnlyEnabled)) != 0) perror("setsockopt");
The other approach would be to create separate IPv4 and IPv6 sockets as necessary (on the client and/or server), but as long as you have a dual-stack networking stack available, the IPv4-mapped IPv6-addresses approach is much easier to work with.
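A minimal sketch of the dual-stack approach described above, assuming a Linux-style dual-stack host; the helper name and error handling are mine, not from any particular codebase:

```c
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

// Returns a UDP socket bound to the IPv6 wildcard with v6-only mode
// disabled, so it receives both IPv6 and (mapped) IPv4 datagrams.
// Returns -1 on failure.
int make_dual_stack_udp_socket(unsigned short port)
{
    int s = socket(AF_INET6, SOCK_DGRAM, 0);
    if (s < 0) return -1;

    int v6OnlyEnabled = 0;  // disable v6-only mode *before* bind()
    if (setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY,
                   &v6OnlyEnabled, sizeof(v6OnlyEnabled)) != 0) {
        close(s);
        return -1;
    }

    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin6_family = AF_INET6;
    addr.sin6_addr   = in6addr_any;  // "::" catches v6 and mapped v4
    addr.sin6_port   = htons(port);
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        close(s);
        return -1;
    }
    return s;
}
```

A recvfrom() on this socket then reports IPv4 senders as ::ffff:a.b.c.d addresses in the sockaddr_in6 it fills in.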
According to the native gRPC implementation, here: https://github.com/grpc/grpc/blob/master/src/core/lib/iomgr/tcp_server_custom.cc#L381
all IPv4 addresses are converted to IPv6 before the socket is opened, here:
https://github.com/grpc/grpc/blob/master/src/core/lib/iomgr/tcp_uv.cc#L191
At least on CentOS, this leads to the gRPC server being unable to start listening.
Can anyone please clarify why this conversion was made? Why not just pass all IPv4 and IPv6 addresses straight to the socket and let them be managed as their original addresses?
Right now I am considering commenting this conversion out, because I need this server in an environment without IPv6, but I am not sure if I am going to break anything by doing so... maybe there is some hidden dependency on the fact that we are always listening on an IPv6 address?
I need to find out the local IP address on a very old platform that doesn't have getifaddrs(). Lots of people recommend doing this by creating a UDP socket, connect()ing it to a destination IP, and then using getsockname() to find out the local IP address (see here).
Is there a way to make this work on systems that don't have Internet access? AFAIU, for UDP sockets connect() just sets the default destination address for the socket without actually establishing any connection. So can I just use some random destination IP with connect(), since it doesn't actually connect for UDP sockets anyway? I.e., can I just pass 8.8.8.8 as a dummy IP to connect(), and will getsockname() still give me the local IP address even if there is no Internet connection?
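For reference, the trick being discussed can be sketched like this (the helper name is mine; note that whether connect() succeeds with no route at all is exactly the open question here, since a routeless host may return ENETUNREACH even for a dummy destination that is never sent to):

```c
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Writes the local IP the kernel would use to reach dest_ip into buf.
// No packet is sent: connect() on a UDP socket only picks a route and
// source address. Returns 0 on success, -1 on failure.
int local_ip_for(const char *dest_ip, char *buf, size_t buflen)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) return -1;

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(53);  // port number is irrelevant here

    struct sockaddr_in local;
    socklen_t len = sizeof(local);
    int rc = -1;
    if (inet_pton(AF_INET, dest_ip, &dest.sin_addr) == 1 &&
        connect(s, (struct sockaddr *)&dest, sizeof(dest)) == 0 &&
        getsockname(s, (struct sockaddr *)&local, &len) == 0 &&
        inet_ntop(AF_INET, &local.sin_addr, buf, buflen) != NULL)
        rc = 0;
    close(s);
    return rc;
}
```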
I don't understand what it means to bind a socket to any address other than 127.0.0.1 (or ::1, etc.).
Am I not, by definition, binding the socket to a port on my own machine, which is localhost?
What sense does it make to bind or listen to another machine or IP address's port?
Conceptually, it just doesn't make sense to me!
(This has proven surprisingly hard to Google... possibly because I'm not Googling the right terms.)
A socket is bound to an address and port either in order to receive data on that socket (the most common case) or to use that address/port as the source when sending data (as is done, for example, for data connections in an FTP server).
Usually there are several interfaces on a machine: the loopback pseudo-interface, through which the machine can reach itself, plus Ethernet, WLAN, VPN, and so on. Each of these interfaces can have multiple IP addresses assigned. For example, loopback usually has 127.0.0.1 (and with IPv6 also ::1), but you can assign others too. Ethernet or WLAN have IP addresses on the local network, e.g. 172.16.0.34.
If you bind a socket for receiving data to a specific address, you can only receive data sent to that IP address. For example, if you bind to 127.0.0.1 you will be able to receive data from your own system but not from other systems on the local network: for one thing, any data they send to 127.0.0.1 goes to their own 127.0.0.1, and for another, your 127.0.0.1 is an address on your internal loopback interface, which is not reachable from outside.
You can also bind a socket to a catch-all (wildcard) address: 0.0.0.0 for IPv4 or :: for IPv6. In that case it is not bound to a specific IP address, but it will receive data sent to any IP address of the machine.
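The difference between a specific bind and a wildcard bind can be sketched like this (the helper name is illustrative, and an ephemeral port is used for brevity):

```c
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Binds a UDP socket to the given IPv4 address ("127.0.0.1" for
// loopback-only, "0.0.0.0" for the wildcard) on an ephemeral port.
// Returns the socket, or -1 on failure.
int bind_udp(const char *ip)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = 0;  // let the OS pick a free port
    if (inet_pton(AF_INET, ip, &addr.sin_addr) != 1 ||
        bind(s, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        close(s);
        return -1;
    }
    return s;
}
```

A socket from bind_udp("127.0.0.1") only receives datagrams addressed to 127.0.0.1 (i.e. from the same machine), while one from bind_udp("0.0.0.0") receives datagrams sent to any of the machine's addresses.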
I tried converting the destination address to IPv6 format, as ::ffff:<IPv4 address>, and using a socket of type AF_INET6. It fails with the error "Network unreachable". But using the same technique I am able to communicate from IPv4 to IPv6.
Thanks for your help in advance.
IPv4 and IPv6 are separate protocols. They don't talk to each other.
On some operating systems you can use an IPv6 socket and accept incoming IPv4 connections on it, but that is just a software convenience to make server code easier to write. I have never seen it work for client code; there you have to create the right socket type yourself.
Usually you resolve a hostname using DNS and get multiple answers (IPv4 and IPv6); you iterate over them, creating the required socket type and trying to connect. If it works you use that socket; if not, you do the next iteration, which creates a new socket, and so on.
If your code is sensitive to delays, you might want to implement the Happy Eyeballs algorithm.
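The resolve-and-iterate loop described above might look roughly like this (the helper name is mine; real code would also report getaddrinfo() errors and apply connect timeouts):

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Resolves host:port, tries each result in order (IPv4 or IPv6),
// and returns the first successfully connected TCP socket, or -1.
int connect_any(const char *host, const char *port)
{
    struct addrinfo hints, *res, *ai;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;    // accept both IPv4 and IPv6 answers
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0) return -1;

    int s = -1;
    for (ai = res; ai != NULL; ai = ai->ai_next) {
        s = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (s < 0) continue;          // family unsupported: try the next
        if (connect(s, ai->ai_addr, ai->ai_addrlen) == 0) break;
        close(s);                     // unreachable: try the next answer
        s = -1;
    }
    freeaddrinfo(res);
    return s;
}
```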
On most systems, PF_INET6 sockets are able to communicate with IPv4 addresses by using addresses in the ::FFFF:0:0/96 range. However, this is only done at the level of the sockets library: the actual on-the-wire data is plain IPv4 packets (as though you had used a PF_INET socket); there is no protocol conversion in the network.
The error you receive indicates that you have no IPv4 route to the requested destination, which probably means your host doesn't have an IPv4 default route. There is no solution to that: without IPv4 connectivity, there is nothing you can do to reach an IPv4 address.
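The mapped-address technique the question uses can be sketched like this (the helper name and port are illustrative; with no IPv4 route, the connect() call is exactly where "Network unreachable" shows up):

```c
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Connects a UDP PF_INET6 socket to the IPv4-mapped form of an IPv4
// address (::ffff:a.b.c.d). Plain IPv4 packets go on the wire; the
// mapping exists only in the sockets layer. Returns the socket or -1.
int connect_mapped(const char *ipv4, unsigned short port)
{
    char mapped[64];
    snprintf(mapped, sizeof(mapped), "::ffff:%s", ipv4);

    int s = socket(AF_INET6, SOCK_DGRAM, 0);
    if (s < 0) return -1;

    struct sockaddr_in6 dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin6_family = AF_INET6;
    dest.sin6_port   = htons(port);
    if (inet_pton(AF_INET6, mapped, &dest.sin6_addr) != 1 ||
        connect(s, (struct sockaddr *)&dest, sizeof(dest)) != 0) {
        close(s);  // with no IPv4 route, connect() fails with ENETUNREACH
        return -1;
    }
    return s;
}
```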
(Please close this as a duplicate if one exists; I tried hard but couldn't find a matching question.)
It seems some OS/platforms listen for both IPv6 and IPv4 (TCP) connections when bound to the IPv6 wildcard address, while others listen only for IPv6, as mentioned in:
the documentation for the V6Only argument,
with the following lines:
If your platform does not support disabling this option but you still want to listen for both AF_INET and AF_INET6 connections you will have to create two listening sockets, one bound to each protocol
and in the section "How IPv6 Works on a Java Platform",
and as per the accepted answer to this SO question.
Now I want to write some Perl code that can determine whether the underlying OS/platform listens for both IPv6 and IPv4 when bound to IPv6. If yes, I'll bind to IPv6 only; if not, I'll create two sockets (one for IPv4 and one for IPv6).
I wonder what could be the best way for this?
As mentioned in IO::Socket::IP I could use
if( IO::Socket::IP->CAN_DISABLE_V6ONLY ) {
...
}
else {
...
}
But I am not sure whether it will tell me exactly that
the underlying OS/platform listens for both IPv6 and IPv4 (if bound to IPv6)
or whether it will just tell me that the
IPV6_V6ONLY socket option cannot be disabled
it will just tell me that the "IPV6_V6ONLY socket option cannot be disabled"
This is correct.
What you can do is attempt to create a PF_INET6 socket, then, if successful, check its IPV6_V6ONLY socket option. If that is true, the socket will listen only on IPv6 and not IPv4 as well, so you'll have to create a second socket for IPv4. If it is false, the socket will accept both IPv6 and IPv4, and this one socket will be sufficient.
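A C sketch of that probe (the helper name and return convention are mine; a Perl version would query the same IPV6_V6ONLY option via getsockopt):

```c
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Probes whether one wildcard-bound IPv6 socket can accept both IPv6
// and IPv4 connections. Returns 1 if a single dual-stack listener is
// possible, 0 if a separate IPv4 socket is needed, -1 if IPv6 is
// unavailable entirely.
int can_listen_dual_stack(void)
{
    int s = socket(AF_INET6, SOCK_STREAM, 0);
    if (s < 0) return -1;             // no IPv6 support at all

    int v6only = 1;
    socklen_t len = sizeof(v6only);
    if (getsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &v6only, &len) != 0) {
        close(s);
        return 0;                     // cannot even query: play it safe
    }
    if (v6only != 0) {
        // Default is v6-only; check whether the platform lets us
        // disable it (the CAN_DISABLE_V6ONLY question from above).
        int off = 0;
        if (setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY,
                       &off, sizeof(off)) != 0) {
            close(s);
            return 0;                 // stuck in v6-only: need two sockets
        }
    }
    close(s);
    return 1;
}
```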