I'd like to count the bytes going in and out of a socket. For a regular socket, I can just total the byte counts returned by recv() and send(). How do you do this with IO::Socket::SSL?
IO::Socket::SSL does not give you that view of the underlying TCP socket, since it just lets OpenSSL handle the TCP socket (via Net::SSLeay). In order to get such details you would need to handle read/write on the TCP socket yourself and then interact with the SSL layer through the BIO interface. Of course, this is far more complex than just using the abstraction offered by IO::Socket::SSL.
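For illustration, here is a minimal sketch of that approach against OpenSSL's C API (Net::SSLeay exposes equivalent calls). The SSL object is wired to two memory BIOs, you shuttle the ciphertext between those BIOs and the TCP socket yourself, and you count the bytes as they pass. Error handling, the handshake loop, and the event loop are omitted; pump() and make_ssl() are illustrative helpers, not library functions.

    #include <openssl/ssl.h>
    #include <openssl/bio.h>
    #include <sys/socket.h>

    static long bytes_in = 0, bytes_out = 0;

    /* Set up an SSL object that reads/writes memory BIOs instead of
     * a socket; the caller keeps ownership of the TCP fd. */
    static SSL *make_ssl(SSL_CTX *ctx, BIO **rbio, BIO **wbio)
    {
        SSL *ssl = SSL_new(ctx);
        *rbio = BIO_new(BIO_s_mem());    /* network -> SSL */
        *wbio = BIO_new(BIO_s_mem());    /* SSL -> network */
        SSL_set_bio(ssl, *rbio, *wbio);  /* ssl now owns the BIOs */
        SSL_set_connect_state(ssl);
        return ssl;
    }

    /* Move pending ciphertext between the BIOs and the TCP socket,
     * counting raw bytes in each direction. Call this after every
     * SSL_connect/SSL_read/SSL_write attempt. */
    static void pump(BIO *rbio, BIO *wbio, int fd)
    {
        char buf[4096];
        int n;

        while ((n = BIO_read(wbio, buf, sizeof buf)) > 0) {
            send(fd, buf, n, 0);         /* ciphertext out */
            bytes_out += n;
        }
        if ((n = recv(fd, buf, sizeof buf, MSG_DONTWAIT)) > 0) {
            BIO_write(rbio, buf, n);     /* ciphertext in */
            bytes_in += n;
        }
    }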
Is it possible for a UDP socket (SOCK_DGRAM) to access the checksum field of an incoming UDP packet and check it for errors? I know that we can do that using raw sockets (SOCK_RAW), but I want to know whether we can do it using datagram sockets. If so, how can we do it in C?
If you create a normal UDP socket you don't have access to the UDP header, and thus not to the checksum either. But the kernel will already discard packets whose checksum is incorrect, so you would not see these packets anyway.
You can't do it using datagram sockets (SOCK_DGRAM), because the TCP/IP stack removes those UDP header bytes from the received buffer before passing it up to higher layer APIs. You need to use raw sockets (SOCK_RAW) so that these bytes are preserved.
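As an illustration, here is a sketch of the raw-socket approach on Linux (requires root or CAP_NET_RAW). With SOCK_RAW and IPPROTO_UDP the buffer handed to recvfrom() starts with the IP header, so the UDP checksum field is still present:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>
    #include <netinet/udp.h>
    #include <arpa/inet.h>

    int main(void)
    {
        char buf[65536];
        int s = socket(AF_INET, SOCK_RAW, IPPROTO_UDP);
        if (s < 0) { perror("socket"); return 1; }

        ssize_t n = recvfrom(s, buf, sizeof buf, 0, NULL, NULL);
        if (n > 0) {
            struct iphdr  *ip  = (struct iphdr *)buf;
            struct udphdr *udp = (struct udphdr *)(buf + ip->ihl * 4);
            printf("UDP checksum field: 0x%04x\n", ntohs(udp->check));
        }
        return 0;
    }

Verifying the checksum yourself then means recomputing it over the UDP pseudo-header, header, and payload and comparing the result with this field.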
To create a packet socket, the following socket() call is used (the socket type and protocol may differ):
socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL))
And to create a stream socket, the following call is used:
socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)
My question is: why use htons() to specify the protocol when creating a packet socket, but not when creating a socket of the AF_INET or AF_INET6 family? Why not use
socket(AF_INET, SOCK_XXX, htons(IPPROTO_XXX))
to create a stream or datagram socket, just as when creating a packet socket, or vice versa? What is different about the protocol argument in the two socket() calls, given that both create sockets, one a packet socket and the other a socket at the TCP level?
First, like most other network parameters that are passed to the kernel (IP addresses, ports, etc), the parameters are passed in their "on-the-wire" format so that underlying software doesn't need to manipulate them before comparing/copying/transmitting/etc. (For comparison, consider that AF_PACKET and SOCK_RAW are parameters to the kernel itself -- hence "native format" is appropriate -- while the ETH_P_xxx value is generally for "comparison with incoming packets"; it just so happens that ETH_P_ALL is a special signal value saying 'capture everything'.)
Second, interpretation of the protocol is potentially different by address family. A different address family could choose to interpret the protocol in whatever form made sense for it. It just so happens that Ethernet and IP have always used big-endian (and were important/ubiquitous enough that big-endian came to be called network order).
Third, the protocol number in the AF_INET world (i.e. Internet Protocol) only occupies a single byte so it doesn't make sense to specify a byte-ordering.
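A sketch illustrating the second point on Linux: the EtherType in a received frame is big-endian on the wire, so both the filter value passed to socket() and any comparison against the header field must use htons() (requires root):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <linux/if_ether.h>
    #include <arpa/inet.h>

    int main(void)
    {
        /* Capture only IPv4 frames: the filter value must match the
         * on-the-wire (big-endian) EtherType. */
        int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_IP));
        if (s < 0) { perror("socket"); return 1; }

        unsigned char frame[ETH_FRAME_LEN];
        ssize_t n = recv(s, frame, sizeof frame, 0);
        if (n >= (ssize_t)sizeof(struct ethhdr)) {
            struct ethhdr *eth = (struct ethhdr *)frame;
            /* eth->h_proto is still in network byte order here. */
            printf("EtherType: 0x%04x\n", ntohs(eth->h_proto));
        }
        return 0;
    }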
My code is passed an open socket. This socket could be either a TCP socket (AF_INET) or a Unix Domain Socket (AF_UNIX).
Depending on the domain of the socket, it will need to be handled differently. In particular, if the socket is bound to an address, I might want to accept incoming connections in a different way.
What is the best way to determine whether the socket I have been passed is a unix domain socket or a TCP socket? The solution would need to work on OS X and Linux at least.
getsockopt appears to allow getting the type of the socket (e.g. SOCK_STREAM etc) but not the domain.
getsockname will return a zero length for unix domain sockets on OSX, but this is officially a bug and the Linux behaviour is different.
The first member of the struct sockaddr returned by getsockname is sa_family; just test that against the symbolic constants. The bug on OS X lets you assume the unix domain when the returned address structure is zero bytes; for other platforms and domains, just check the returned structure.
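A minimal sketch of that check, with the OS X zero-length workaround folded in:

    #include <sys/socket.h>
    #include <sys/un.h>
    #include <netinet/in.h>

    /* Returns the address family of an open socket, or -1 on error. */
    int socket_family(int fd)
    {
        struct sockaddr_storage ss;
        socklen_t len = sizeof ss;

        if (getsockname(fd, (struct sockaddr *)&ss, &len) < 0)
            return -1;
        /* OS X bug: a unix domain socket may come back with len == 0,
         * so treat an empty result as AF_UNIX. */
        if (len == 0)
            return AF_UNIX;
        return ss.ss_family;
    }

Then dispatch on the result: socket_family(fd) == AF_UNIX versus AF_INET (or AF_INET6).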
getsockname() is the only cross-platform socket API to query a socket for its locally bound address, and thus its address family.
On Windows, at least, you can use getsockopt(SOL_SOCKET, SO_PROTOCOL_INFO) to retrieve a WSAPROTOCOL_INFO struct, which has an iAddressFamily field. Maybe there are similar platform-specific APIs on other OSes.
I was brushing up on my socket programming knowledge and ran into a question.
First let me explain my understanding of sockets.
Socket binding associates the socket with a port.
Socket binding helps the kernel identify the process to which it should forward an incoming packet.
In connection-oriented communication, socket establishment is as below.
At the server side:
socket() --> bind() --> listen() --> accept() --> ...
At the client side:
socket() --> connect() --> ...
My question is: why does the client not need to bind its socket? In the client's case, if it sends a request, it has to get the response on its socket, and the kernel has to forward that response to its process. For these things to happen, isn't binding needed? If not, how does the kernel know which process to send the response packet to?
Also, in the connectionless case, the client does call bind(). Why is it needed there?
My question is: why does the client not need to bind its socket?
Because the kernel does the bind automatically when you call connect(), if you haven't bound the socket yourself.
Also, in the connectionless case, the client does call bind(). Why is it needed there?
Because otherwise the socket isn't bound to an IP address:port, so it can't send or receive anything. It has no path to the outside world.
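You can watch the implicit bind happen: after connect() succeeds, getsockname() reports the local address and the ephemeral port the kernel picked. A sketch (the server address here is just a placeholder):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in srv = {0}, local;
        socklen_t len = sizeof local;

        srv.sin_family = AF_INET;
        srv.sin_port   = htons(80);
        inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr); /* placeholder */

        if (connect(s, (struct sockaddr *)&srv, sizeof srv) == 0 &&
            getsockname(s, (struct sockaddr *)&local, &len) == 0) {
            char ip[INET_ADDRSTRLEN];
            inet_ntop(AF_INET, &local.sin_addr, ip, sizeof ip);
            /* Prints the kernel-chosen ephemeral address:port. */
            printf("bound to %s:%u\n", ip, ntohs(local.sin_port));
        }
        return 0;
    }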
You always open a socket first. This is the path through the kernel. The connect() call, for say TCP, happens after the socket is made.
Look at TCP versus UDP clients.
TCP
s = socket(options....)
connect(s, server_address)
send(s, data)
UDP
s = socket(options....)
sendto(s, data, server_address)
(Note: send() on an unconnected UDP socket fails with EDESTADDRREQ; either use sendto() or connect() the socket first.)
bind("0.0.0.0", 0) (all interfaces, any port) is implicit if you call connect(...) or listen(...) without an explicit bind(...).
All sockets must be bound to a local port, even connectionless ones, so that bi-directional communication is possible (even if you never use the reverse direction).
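For instance, a minimal connectionless receiver: without the explicit bind() to a known port, peers would have nowhere to send, and recvfrom() would have nothing to deliver. Port 9999 here is arbitrary:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = {0};
        char buf[2048];

        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY); /* all interfaces */
        addr.sin_port        = htons(9999);       /* known port */

        if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind");
            return 1;
        }
        ssize_t n = recvfrom(s, buf, sizeof buf, 0, NULL, NULL);
        printf("got %zd bytes\n", n);
        return 0;
    }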
I am creating a UDP client in C (unicast) and am wondering why recvfrom() has a struct sockaddr * argument, about which the man page says:
A null pointer, or points to a sockaddr structure in which the sending address is to be stored.
Is it possible that I could receive a message from a server other than the one I sendto()? If yes, how can I create this scenario?
If no, is it correct to say that this argument is only useful when broadcast mode is used?
Yes, this is perfectly possible. The reason is that UDP is not stream-based but packet-based. Every packet is treated without any history of other packets sent or received.
For this reason you may also open a UDP port and then send packets to different hosts from it. However, I do not remember how well this is supported by the API.
The UDP socket will recvfrom() any host that sends to the correct port, unless you explicitly connect(), in which case you can just write() and read(), and you will get errors when ICMP messages are received.
Considering you always have two parties in UDP, it seems rather obvious that someone has to recvfrom() first.
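To see the scenario from the question, a sketch: on an unconnected UDP socket, recvfrom() reports whichever host actually sent the datagram, which need not be the one you sent to; connect() is what makes the kernel discard datagrams from everyone else. (demo() is an illustrative helper taking an already-created, unconnected AF_INET datagram socket.)

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    void demo(int s)
    {
        char buf[2048], ip[INET_ADDRSTRLEN];
        struct sockaddr_in peer;
        socklen_t plen = sizeof peer;

        /* May be answered by any host, not just the one we sent to. */
        ssize_t n = recvfrom(s, buf, sizeof buf, 0,
                             (struct sockaddr *)&peer, &plen);
        if (n >= 0) {
            inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof ip);
            printf("%zd bytes from %s:%u\n", n, ip, ntohs(peer.sin_port));
        }
        /* To accept only one peer, connect() the socket to it; the
         * kernel then drops datagrams from other sources:
         *   connect(s, (struct sockaddr *)&peer, sizeof peer);
         */
    }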