How do I get useful data from a UDP socket using GNAT.Sockets in Ada?

Summary:
I am writing a server in Ada that should listen and reply to messages received over UDP. I am using the GNAT.Sockets library and have created a socket and bound it to a port. However, I am not sure how to listen for and receive messages on the socket. The Listen_Socket procedure is for TCP sockets, and it seems that using Stream with UDP sockets is not recommended. I have seen the Receive_Socket and Receive_Vector procedures as alternatives, but I am not sure how to use them or how to convert their output to a usable format.
More details:
I am writing a server that should reply to messages that it gets over UDP. A minimal example of what I have so far would look like this:
with GNAT.Sockets; use GNAT.Sockets;

procedure UDP is
   Sock   : Socket_Type;
   Family : constant Family_Type := Family_Inet;
   Port   : constant Port_Type   := 12345;
   Addr   : Sock_Addr_Type (Family);
begin
   Create_Socket (Sock, Family, Socket_Datagram);
   Addr.Addr := Any_Inet_Addr;
   Addr.Port := Port;
   Bind_Socket (Sock, Addr);
   --  Listen_Socket (Sock);  --  a TCP thing, not for UDP
   --  now what?
end UDP;
For a TCP socket, I can listen, accept, then use the Stream function to get a nice way to read the data (as in 'Read and 'Input). While the Stream function still exists for UDP sockets, I have found an archive of a ten-year-old comp.lang.ada thread in which multiple people say not to use streams with UDP.
Looking in g-socket.ads, I do see alternatives: the Receive_Socket and Receive_Vector procedures. However, the output of the former is a Stream_Element_Array (with an offset indicating the length), and the latter produces something similar, just with a length associated with each element of the vector.
According to https://stackoverflow.com/a/40045312/7105391, the way to turn these types into a stream is to not get them in the first place and instead get a stream directly, which is not particularly helpful here.
In a GitHub gist I found, Unchecked_Conversion is used to turn the arrays into strings and vice versa, but given that the reference manual (13.13.1) says type Stream_Element is mod <implementation-defined>;, I'm not entirely comfortable with that approach.
All in all, I'm pretty confused about how I'm supposed to do this. I'm even more confused about the lack of examples online, as this should be a pretty basic thing to do.
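For what it's worth, what Receive_Socket wraps is the classic recvfrom loop, which is the same in every language. A minimal sketch of a UDP echo server in Python (Python chosen only to show the pattern; the function name serve_one is illustrative, and the port matches the Ada snippet above):

```python
import socket

def serve_one(port=12345):
    """Receive one UDP datagram and echo it back to its sender.
    There is no listen/accept step for datagram sockets: bind, then
    read datagrams directly."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))        # equivalent of Any_Inet_Addr
    data, sender = sock.recvfrom(4096)  # blocks until a datagram arrives
    sock.sendto(data, sender)           # reply to whoever sent it
    sock.close()
    return data
```

The GNAT equivalent is Receive_Socket giving you a Stream_Element_Array plus the sender's Sock_Addr_Type, followed by Send_Socket back to that address.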

Related

Switching to websocket from simple Berkeley socket

I have a simple socket implementation which uses the standard low-level Berkeley socket functions (bind, listen, accept, read, etc).
This socket listens on a port, let's say X.
Now what I'm trying to achieve is to make Simple-WebSocket-Server also listen on this port X.
Of course this is not possible by nature - I know.
My intention is this: in my simple socket implementation I would detect whether the connected client (after accepting) is my client or a websocket one; if I find the client to be a websocket one, I would pass the whole thing to this library, which would then behave as if it had accepted the client itself.
What would be good is to hand over the socket's fd, along with the first bytes that my socket read before noticing it was a websocket request.
I'm a bit stuck on what would be the best way to do this, but I certainly don't want to reimplement the whole websocket stack.
The tricky thing here is that the Simple-WebSocket-Server does its own accept, so I don't think there is a way to hand over to it an fd together with an array of "first bytes".
Some approaches I could think of:
modify the Simple-WebSocket-Server so that, instead of closing a non-WS client or timing out, it makes a call into your own handler
instead use something like websocketpp to create your own websocket server, and then select between the two servers (I did something similar for one of my own projects, where I had to detect the socket protocol from the first byte and then select an appropriate protocol handler: wampcc protocol selector)
or, you could have the Simple-WebSocket-Server listen on a different port Y; you also listen on X, and if you detect a websocket client on X, you create an internal pair of queues, open a connection to localhost:Y, and proceed to transfer bytes between the pair of sockets; this way you don't need to modify the Simple-WebSocket-Server code at all
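The detection step that the first and third approaches rely on can be done without consuming any bytes, using the MSG_PEEK flag: a WebSocket client always opens with an HTTP request line, so peeking at the first few bytes is enough to route the connection. A sketch in Python (the helper name looks_like_websocket is an assumption; the same flag exists in the C socket API):

```python
import socket

def looks_like_websocket(conn):
    """Peek at the first bytes of a connected stream socket without
    consuming them. A WebSocket handshake starts with an HTTP request
    line such as 'GET / HTTP/1.1', so 'GET' is a usable discriminator.
    MSG_PEEK leaves the data in the kernel buffer, so whichever handler
    wins still sees every byte from the start."""
    first = conn.recv(4, socket.MSG_PEEK)
    return first.startswith(b"GET")
```

This is why the hand-over problem is avoidable: if you only peek, there are no "first bytes" to pass along, and the library's own accept-and-read path still works unmodified.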

Perl and server-client sockets

Background/Context
I have two scripts: a server-side script that can handle multiple clients, and a client-side script that connects to the server. Any client that sends a message to the server has that message copied/echoed to all the other connected clients.
Where I'm stuck.
This afternoon, I have been grasping at thin air searching for a thorough explanation, with examples, covering all there is to Perl and TCP sockets. A surprisingly large number of results from Google still list articles from 2007-2012. It appears that originally there was the 'Socket' module, and over time IO::Socket was added, then IO::Select. But the Perldoc pages don't cover or reference everything in one place, or provide sufficient cross-referencing links. I gather that most of the raw calls in Socket have an equivalent in IO::Socket. And it's possible (recommended? yes/no?) to do a functional call on the socket if something isn't available via the OO modules...
Problem 1. The far-side/peer has disconnected / the socket is no longer ESTABLISHED?
I have been trying everything I ran across today, including IO::Select with calls to can_read and has_exception, but the outputs from these show no differences regardless of whether the socket is up or down. I confirmed from netstat output that the non-blocking socket is torn down instantly by the OS (macOS).
Problem 2. Is there data available to read?
For my previous Perl client scripts, I have rolled my own method using sysread (https://perldoc.perl.org/functions/sysread.html), but today I noticed that recv is listed within the synopsis near the top of https://perldoc.perl.org/IO/Socket.html, yet there is no mention of the recv method in the detailed info below...
From other C and Java doco pages, I gather there is a convention of returning undef, 0, >0, and on some implementations -1 when doing the equivalent of sysread. Is there an official perl spec someone can link me to that describes what Perl has implemented? Is sysread or recv the 'right' way to be reading from TCP sockets in the first place?
I haven't provided my code here because I'm asking from a 'best-practices' point of view: what is the 'right' way to do client-server communication? Is polling even the right way to begin with? Is there an event-driven method that I've somehow missed?
My sincere apologies if what I've asked for is already available, but google keeps giving me the same old result pages and derivative blogs/articles that I've already read.
Many thanks in advance.
And it's possible (recommended? yes/no?) to do a functional call on the socket if something isn't available via the OO modules...
I'm not sure which functional calls you refer to which are not available in IO::Socket. But in general IO::Socket objects are also normal file handles. This means you can do things like $server->accept but also accept($server).
Problem 1. The far-side/peer has disconnected / the socket is no longer ESTABLISHED?
This problem is not specific to Perl but how select and the socket API work in general. Perl does not add its own behavior in this regard. In general: If the peer has closed the connection then select will show that the socket is available for read and if one does a read on the socket it will return no data and no error - which means that no more data are available to read from the peer since the peer has properly closed its side of the connection (connection close is not considered an error but normal behavior). Note that it is possible within TCP to still send data to the peer even if the peer has indicated that it will not send any more data.
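The behavior described above (select reports readable, then the read returns zero bytes) is easy to verify in any language; a small sketch in Python, with the helper name peer_state being an assumption of this example:

```python
import select
import socket

def peer_state(sock, timeout=1.0):
    """When the peer closes its side, select reports the socket as
    readable, and the subsequent read returns zero bytes (EOF) rather
    than raising an error. That is the normal close signal."""
    readable, _, _ = select.select([sock], [], [], timeout)
    if not readable:
        return "no data yet"
    data = sock.recv(4096)
    return "peer closed (EOF)" if data == b"" else "data: %r" % data
```

The same logic applies to Perl's IO::Select->can_read followed by sysread: a defined zero-length result means the peer shut down its sending side, not that anything went wrong.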
Problem 2. Is there data available to read?
sysread and recv differ in the same way that read and recv/recvmsg differ in the underlying libc. Specifically, recv can take flags which, for example, allow peeking at data available in the system's socket buffer without consuming it. See the documentation for more information.
I would recommend using sysread instead of recv, since the behavior of sysread can be redefined when tying a file handle, while the behavior of recv cannot. Tying the file handle is done, for example, by IO::Socket::SSL, so that the decrypted data from the SSL socket are returned rather than the raw data from the underlying OS socket.
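The flags argument mentioned above is the concrete difference between recv and a plain read; MSG_PEEK is the canonical example. A sketch in Python (same flag as in libc and Perl's recv; the helper name peek_then_read is illustrative):

```python
import socket

def peek_then_read(sock, n=4096):
    """recv's extra 'flags' argument is what a plain read lacks:
    MSG_PEEK returns the buffered bytes while leaving them in the
    kernel buffer, so the next ordinary read sees the same data."""
    peeked = sock.recv(n, socket.MSG_PEEK)  # look, don't consume
    read = sock.recv(n)                     # now actually consume
    return peeked, read
```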
From other C and Java doco pages, I gather there is a convention of returning undef, 0, >0, and on some implementations -1 when doing the equivalent of sysread. Is there an official perl spec someone can link me to that describes what Perl has implemented?
The behavior of sysread is well documented. To cite from what you get when using perldoc -f sysread:
... Returns the number of bytes
actually read, 0 at end of file, or undef if there was an error
(in the latter case $! is also set).
Apart from that, you state your problem as "Is there data available to read?" but then you only talk about sysread and recv, not how to check whether data is available before calling these functions. I assume you are using select (or IO::Select, which is just a wrapper) to do this. While can_read of IO::Select can be used to get this information in most cases, it only reports on the underlying OS socket. With plain sockets this is enough, but when using SSL, for example, there is internal buffering in the SSL stack, and can_read might return false even though there are still data available to read in the buffer. See Common Usage Errors: Polling of SSL sockets on how to handle this properly.

does socket.h's recvfrom() function receive individual packets?

The man page for recvfrom summarizes its behavior as "receive a message from a socket". If the socket is of type SOCK_STREAM or SOCK_DGRAM, is "message" synonymous with "packet"? If not, how does it differ?
My first thought was recvfrom works on stream sockets just because there's no reason to ban it. As in the famous quote:
"Unix was not designed to stop its users from doing stupid things, as that would also stop them from doing clever things." – Doug Gwyn
If it did what I expected it to do, you could use it like a combination read() and getpeername() since it returns the sender's address. That might be considered clever.
But then I tried it on Linux, and it didn't work that way. The source address buffer was unchanged and the length indicator was set to 0.
So now I'm not sure what to say except don't use it on stream sockets. It's not meant for them.
ADDENDUM: And no, even in my wildest dreams I wouldn't have expected it to give you access to packet boundaries in a TCP stream. Data that has been put through the TCP receiving mechanism simply isn't made of packets anymore.
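The addendum's point, that stream data isn't made of packets, can be demonstrated directly: two separate writes on one end can arrive as a single read on the other. A sketch in Python, using a Unix stream socketpair as a stand-in for TCP (same byte-stream semantics; the function name boundaries_demo is illustrative):

```python
import socket

def boundaries_demo():
    """Stream sockets carry a byte stream, not messages: two separate
    send() calls on one side may arrive merged on the other. Reading
    until EOF yields the concatenated bytes with no boundary between
    the original writes."""
    a, b = socket.socketpair()
    a.sendall(b"first")
    a.sendall(b"second")
    a.close()
    chunks = []
    while True:
        data = b.recv(4096)
        if not data:
            break
        chunks.append(data)
    b.close()
    return b"".join(chunks)
```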

How to determine who wrote what to a socket, given 2 writers?

Say one part of a program writes some stuff to a socket and another part of the same program reads stuff from that same socket. If an external tool writes to that very same socket, how would I differentiate who wrote what to the socket (using the part that reads it)? Would using a named pipe work?
If you are talking about TCP, the situation you describe is impossible, because connections are 1-to-1. If you mean UDP, you can get the sender's address from the recvfrom() (or recvmsg()) function.
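For the UDP case, recvfrom hands back the sender's address alongside each datagram, which is exactly what lets the reading side tell two writers apart. A self-contained sketch in Python (the function name who_wrote_what is illustrative):

```python
import socket

def who_wrote_what():
    """UDP keeps datagram boundaries and reports each sender's address:
    recvfrom returns (data, sender), so the reader can map each message
    to the socket that wrote it."""
    reader = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    reader.bind(("127.0.0.1", 0))       # let the OS pick a free port
    addr = reader.getsockname()

    w1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    w2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    w1.sendto(b"from writer 1", addr)
    w2.sendto(b"from writer 2", addr)

    seen = {}
    for _ in range(2):
        data, sender = reader.recvfrom(4096)
        seen[sender] = data             # sender is an (ip, port) tuple
    for s in (reader, w1, w2):
        s.close()
    return seen
```

Each writer gets its own ephemeral source port, so the two senders show up as distinct addresses.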

Send a zero-data TCP/IP packet using Java

My goal is to send a TCP packet with an empty data field, in order to test the socket connection with the remote machine.
I am using the OutputStream class's method of write(byte[] b).
my attempt:
outClient = ClientSocket.getOutputStream();
outClient.write(("").getBytes());
Doing so, the packet never shows up on the wire. It works fine if "" is replaced by " " or any non-empty string.
I tried jpcap, which worked with me, but didn't serve my goal.
I thought of extending the OutputStream class and implementing my own OutputStream.write method, but this is beyond my knowledge. Please give me advice if you have already done such a thing.
If you just want to quickly detect silently dropped connections, you can use Socket.setKeepAlive(true). This method asks TCP/IP to handle heartbeat probing without any data packets or application programming. If you want more control over the frequency of the heartbeat, however, you should implement heartbeats with application-level packets.
See here for more details: http://mindprod.com/jgloss/socket.html#DISCONNECT
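Java's Socket.setKeepAlive(true) is a thin wrapper over the SO_KEEPALIVE socket option, which exists in every socket API. A sketch of the same switch in Python (the helper name enable_keepalive is illustrative):

```python
import socket

def enable_keepalive(sock):
    """Socket-level equivalent of Java's Socket.setKeepAlive(true):
    once set, the OS sends periodic probes on an idle connection and
    errors out the socket if the peer stops answering. Returns the
    option's value read back, nonzero when enabled."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    return sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
```

Note the probe interval is an OS-wide default (commonly two hours of idle time before the first probe), which is why the answer below recommends application-level heartbeats when you need faster detection.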
There is no such thing as an empty TCP packet, because there is no such thing as a TCP packet. TCP is a byte-stream protocol. If you send zero bytes, nothing happens.
To detect broken connections, short of keepalive which only helps you after two hours, you must rely on read timeouts and write exceptions.
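A read timeout, the first of the two mechanisms mentioned above, looks like this in Python (Java's equivalent is Socket.setSoTimeout; the helper name read_with_timeout is illustrative):

```python
import socket

def read_with_timeout(sock, seconds=5.0):
    """Read timeouts are the portable way to notice a silent peer long
    before TCP keepalive's default two-hour idle period kicks in."""
    sock.settimeout(seconds)
    try:
        data = sock.recv(4096)
    except socket.timeout:
        return None      # nothing arrived within the deadline
    return data          # b"" here means the peer closed cleanly
```

The three outcomes (timeout, data, clean EOF) cover every state short of a hard network error, which surfaces as a write exception instead.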