What's the difference between endpoint and socket? - sockets

Almost every definition of "socket" that I've seen relates it closely to the term "endpoint":
Wikipedia:
A network socket is an internal endpoint for sending or receiving data
at a single node in a computer network. Concretely, it is a
representation of this endpoint in networking software
This answer:
a socket is an endpoint in a (bidirectional) communication
Oracle's definition:
A socket is one endpoint of a two-way communication link between two
programs running on the network
Even stackoverflow's definition of the tag 'sockets' is:
An endpoint of a bidirectional inter-process communication flow
This other answer goes a bit further:
A TCP socket is an endpoint instance
I don't understand what "instance" means in this case, though. If an endpoint is, according to this answer, a URL, I don't see how that can be instantiated.

"Endpoint" is a general term, including pipes, interfaces, nodes and such, while "socket" is a specific term in networking.

IMHO - logically speaking - "socket" and "endpoint" are the same thing, because both are the combination of an Internet address with a TCP port. Strictly speaking, in core networking there is no such thing as an "endpoint"; there is only a "socket". Go on, read more below...
As @Zac67 highlighted, "socket" is a very specific term in networking - if you read the TCP RFC (https://www.rfc-editor.org/rfc/rfc793) you won't find a single reference to "endpoint"; it only talks about "sockets". But once you step out of the RFC world, you will hear a lot about "endpoints".
Now, both terms refer to the combination of an IP address and a TCP port, but you wouldn't ask someone "please give me the socket of your application"; you would say "please give me the endpoint of your application". So, IMHO, the way to understand the difference is: even though both refer to the combination of an IP address and a TCP port, you use "socket" when talking in the context of computer processes or the OS, and "endpoint" when talking about it in general.
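To make that concrete: in the POSIX C sockets API, the "endpoint" is what you describe in a sockaddr_in (an IP address plus a port), while the "socket" is the OS object you create and then bind to that endpoint. A minimal sketch (the address and port are picked arbitrarily):

#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);      /* the socket: an object managed by the OS */
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in ep;                        /* the endpoint: IP address + TCP port */
    memset(&ep, 0, sizeof(ep));
    ep.sin_family = AF_INET;
    ep.sin_port = htons(8080);                    /* arbitrary example port */
    inet_pton(AF_INET, "127.0.0.1", &ep.sin_addr);

    if (bind(s, (struct sockaddr *)&ep, sizeof(ep)) < 0)   /* attach the socket to the endpoint */
        perror("bind");

    close(s);
    return 0;
}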

I am a guy coming from the embedded-systems world and low-level things.
An endpoint is a hardware buffer constructed at the far end, away from your machine. What does that mean?
YourMachine <---------------> Device
[Socket] ----------------> [Endpoint]
[Endpoint] <---------------- [Socket]
Both sockets and endpoints are endpoints, but a socket is the endpoint that resides on the sender, which here is your machine. ("Socket" is a word used to distinguish the sender from the receiver.)
OK, now that we know it is a buffer, what is the relation between buffers and networking?
Windows
When you create a socket on Windows, the OS returns a handle to that socket; a socket is in fact a kernel object. In Windows, when you create a kernel object, the returned value is a handle that is used to access that object. Handles are usually void* values that are then cast into a numerical value Windows can understand. Now that you have access to the socket kernel object, all I/O operations are handled in the OS kernel, and since you want to communicate with an external device, you have to reach the kernel first - and that is exactly what the socket does. In other words, creating a socket creates a socket object in the kernel = creates an endpoint in the kernel = creates a buffer in the kernel. That buffer is later used to stream data through the wires via the OS HAL (hardware abstraction layer), and you can talk to other devices and you are happy.
Now, if the other device doesn't have a communication buffer (= endpoint), then you can't communicate with it, even if you open a socket on your end; it has to be two-way data communication = send and receive.
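A rough Winsock sketch of that idea (error handling trimmed): socket() hands back a SOCKET handle referring to the kernel object, and everything done with it afterwards goes through the kernel:

#include <winsock2.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    SOCKET s = socket(AF_INET, SOCK_STREAM, 0);       /* handle to the socket kernel object */
    if (s == INVALID_SOCKET) {
        printf("socket() failed: %d\n", WSAGetLastError());
    } else {
        printf("got handle %llu to the kernel socket object\n", (unsigned long long)s);
        closesocket(s);                               /* releases the kernel object */
    }

    WSACleanup();
    return 0;
}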
Another example of accessing an I/O peripheral is accessing RAM (main memory). There are two ways to access RAM: through the process stack or through the process heap. The stack is not a kernel object; in fact, you can access the stack directly without reaching the OS kernel, simply by subtracting a value from RSP (the stack pointer register). Example:
; This example demonstrates how to reserve 32 contiguous bytes of stack on Windows
; MASM, Intel syntax
StackAllocate proc
    sub rsp, 20h        ; reserve 32 bytes below the current stack pointer
    ; ... use the memory at [rsp] ...
    add rsp, 20h        ; release the space again before returning
    ret
StackAllocate endp
Accessing the heap is different: the heap is a kernel object, so when you call malloc()/the new operator in your code, a long call chain runs through Windows code. The point is that reaching RAM requires the kernel's help. The stack allocation above does not actually reach RAM; all I did was subtract a number from an existing value in RSP, which lives inside the CPU, so I never went outside it. The heap object in the kernel returns a handle that Windows uses to manage fragmented memory, and in the end you get back a void* to that memory.
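A tiny C sketch of the same contrast (assuming a typical allocator that eventually asks the OS for memory):

#include <stdlib.h>

void demo(void)
{
    char on_stack[32];            /* stack: just moves the stack pointer, no kernel call needed */
    char *on_heap = malloc(32);   /* heap: the allocator may call into the OS (e.g. HeapAlloc/VirtualAlloc on Windows) */

    if (on_heap != NULL) {
        on_stack[0] = on_heap[0] = 0;   /* use both buffers */
        free(on_heap);
    }
}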
Hope that helped

Related

How does socketcan handle arbitration?

I pretty much understand how the CAN protocol works -- when two nodes attempt to use the network at the same time, the lower-ID CAN frame gets priority and the other node detects this and halts.
This seems to get abstracted away when using SocketCAN - we simply write and read as we would with any file descriptor. I may be misunderstanding something, but I've gone through most of the docs (http://lxr.free-electrons.com/source/Documentation/networking/can.txt) and I don't think it's described unambiguously.
Does write() block until our frame is the lowest id frame, or does socketcan buffer the frame until the network is ready? If so, is the user notified when this occurs or do we use the loopback for this?
write() does not block for channel contention. It could block for the same reasons a TCP socket write would (very unlikely).
The CAN peripheral will receive a frame to be transmitted from the kernel and perform the Medium Access Control Protocol (MAC protocol) to send it over the wire. SocketCAN knows nothing about this layer of the protocol.
Where the frame is buffered is peripheral/driver dependent: the kernel-driver-peripheral chain behaves as three chained FIFOs with their own flow-control mechanisms, but it is usually the driver that does most of the buffering (when buffering is needed), since the peripheral has less memory available.
It is possible to subscribe for errors in the CAN protocol stack (signaled by the so-called "error frames") by providing certain flags through the SocketCAN interface (see 4.1.2 in your link): this is the way to get error information at the application layer.
Of course you can check for a correctly transmitted frame by checking the loopback interface, but that is overkill; the error-reporting mechanism described above should be used instead, and it is easier to use.
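A minimal sketch of both points on Linux SocketCAN (the interface name "can0", the CAN ID, and the payload are placeholders; error handling mostly trimmed):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>
#include <linux/can/error.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) { perror("socket"); return 1; }

    /* Subscribe to error frames (bus errors, lost arbitration, ...) at the application layer. */
    can_err_mask_t err_mask = CAN_ERR_MASK;
    setsockopt(s, SOL_CAN_RAW, CAN_RAW_ERR_FILTER, &err_mask, sizeof(err_mask));

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");                 /* placeholder interface name */
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = { .can_family = AF_CAN, .can_ifindex = ifr.ifr_ifindex };
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    struct can_frame frame = { .can_id = 0x123, .can_dlc = 2, .data = { 0xDE, 0xAD } };
    /* write() only hands the frame to the kernel/driver FIFOs; it does not wait for bus arbitration. */
    if (write(s, &frame, sizeof(frame)) != sizeof(frame))
        perror("write");

    close(s);
    return 0;
}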

Can I have more than 32 netlink sockets in kernelspace?

I have several kernel modules which need to interact with userspace. Hence, each module has a Netlink socket.
My problem is that these sockets interfere with each other. This is because all of them register to the same Netlink address family (because there aren't many available to begin with - the max is 32 and more than half are already reserved) and also because they all bind themselves to the same pid (the kernel pid - zero).
I wish there were more room for address families. Or, better yet, I wish I could bind my sockets to other pids. How come Netlink is the preferred user-kernel channel if only 32 sockets can be open at any one time?
libnl-3's documentation says
The netlink address (port) consists of a 32bit integer. Port 0 (zero) is reserved for the kernel and refers to the kernel side socket of each netlink protocol family. Other port numbers usually refer to user space owned sockets, although this is not enforced.
That last claim seems to be a lie right now. The kernel uses a constant as pid and doesn't export more versatile functions:
if (netlink_insert(sk, 0))
        goto out_sock_release;
I guess I can recompile the kernel and increase the address family limit. But these are kernel modules; I shouldn't have to do that.
Am I missing something?
No.
Netlink's socket count limit is why Generic Netlink exists.
Generic Netlink is a layer on top of stock Netlink. Instead of opening a socket, you register a callback on an already established socket, and listen to messages directed to a "sub"-family there. Given there are more available family slots (1023) and no ports, I'm assuming they felt a separation between families and ports was unnecessary at this layer.
To register a listener in kernelspace, use genl_register_family() or its siblings. In userspace, Generic Netlink can be used via libnl-3's API (though the API is rather limited, the code is open and speaks for itself).
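A rough kernelspace sketch (the family name "my_family" and the MY_CMD_ECHO command are made up for illustration, and the exact struct genl_family fields vary a bit between kernel versions):

#include <linux/module.h>
#include <net/genetlink.h>

enum { MY_CMD_UNSPEC, MY_CMD_ECHO };

static int my_echo_doit(struct sk_buff *skb, struct genl_info *info)
{
        pr_info("got a MY_CMD_ECHO message\n");
        return 0;
}

static const struct genl_ops my_ops[] = {
        { .cmd = MY_CMD_ECHO, .doit = my_echo_doit },
};

static struct genl_family my_family = {
        .name    = "my_family",          /* userspace looks the family up by this name */
        .version = 1,
        .maxattr = 0,
        .ops     = my_ops,
        .n_ops   = ARRAY_SIZE(my_ops),
        .module  = THIS_MODULE,
};

static int __init my_init(void)
{
        return genl_register_family(&my_family);
}

static void __exit my_exit(void)
{
        genl_unregister_family(&my_family);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");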
You are confused by the MAX_LINKS variable name. It is not the "maximum number of links"; it is the "maximum number of families". The things you listed are netlink families or, in other words, netlink groups. There are indeed 32 families, each dedicated to serving some particular purpose. For example, NETLINK_SELINUX is for SELinux notifications and NETLINK_KOBJECT_UEVENT is for kobject notifications (these are what udev handles).
But there are no restrictions on the number of sockets for each family.
When you call netlink_create, it checks your protocol number, which in the case of a netlink socket is the netlink family, e.g. NETLINK_SELINUX. Look at the code:
static int netlink_create(struct net *net, struct socket *sock, int protocol,
                          int kern)
{
        ...
        if (protocol < 0 || protocol >= MAX_LINKS)
                return -EPROTONOSUPPORT;
        ...
This is how MAX_LINKS is used.
Later, to actually create the socket, it invokes __netlink_create, which in turn calls sk_alloc, which in turn calls sk_prot_alloc. In sk_prot_alloc the socket is allocated with kmalloc (netlink doesn't have its own slab cache):
        slab = prot->slab;
        if (slab != NULL) {
                sk = kmem_cache_alloc(slab, priority & ~__GFP_ZERO);
                if (!sk)
                        return sk;
                if (priority & __GFP_ZERO) {
                        if (prot->clear_sk)
                                prot->clear_sk(sk, prot->obj_size);
                        else
                                sk_prot_clear_nulls(sk, prot->obj_size);
                }
        } else
                sk = kmalloc(prot->obj_size, priority);
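In other words, from userspace you can open as many sockets of a given family as you like; here is a quick sketch (NETLINK_ROUTE chosen arbitrarily):

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

int main(void)
{
    int fds[4];

    for (int i = 0; i < 4; i++) {
        /* All four sockets use the same family (protocol); binding with
         * nl_pid = 0 lets the kernel assign each one a unique port id. */
        fds[i] = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
        if (fds[i] < 0) { perror("socket"); return 1; }

        struct sockaddr_nl addr = { .nl_family = AF_NETLINK, .nl_pid = 0 };
        if (bind(fds[i], (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("bind");
    }

    printf("opened 4 NETLINK_ROUTE sockets in one process\n");

    for (int i = 0; i < 4; i++)
        close(fds[i]);
    return 0;
}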

What's the purpose of SIO_RCVALL option of WSAIoctl in raw sockets?

As far as I know, if one creates a raw socket (of type SOCK_RAW) and binds it to a network interface, one can receive all the IP traffic on that interface just by using the recvfrom function.
But in many sniffer examples I have seen a call to Winsock's WSAIoctl function with the SIO_RCVALL control code.
So, what is the purpose of that control code when it comes to sniffing?
Read the documentation. SIO_RCVALL is what actually puts the interface into receive-all mode so it can be sniffed; simply binding a raw socket is not enough. The value you pass also controls, to some extent, what level of sniffing is allowed.
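For reference, a rough sketch of the usual pattern (error handling trimmed; the local address is a placeholder for a real interface address, and this requires administrator rights):

#include <winsock2.h>
#include <ws2tcpip.h>
#include <mstcpip.h>   /* SIO_RCVALL, RCVALL_ON */
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_RAW, IPPROTO_IP);

    /* Bind to the interface whose traffic we want to see. */
    struct sockaddr_in local = { 0 };
    local.sin_family = AF_INET;
    inet_pton(AF_INET, "192.168.1.10", &local.sin_addr);    /* placeholder interface address */
    bind(s, (struct sockaddr *)&local, sizeof(local));

    /* Without this the raw socket only sees a subset of traffic;
       SIO_RCVALL switches the interface into receive-all mode. */
    DWORD opt = RCVALL_ON, returned = 0;
    WSAIoctl(s, SIO_RCVALL, &opt, sizeof(opt), NULL, 0, &returned, NULL, NULL);

    char buf[65535];
    recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);            /* one captured IP datagram */

    closesocket(s);
    WSACleanup();
    return 0;
}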

Socket read with pcap

I have a socket bound to a NIC that I am using to capture packets in a pcap_loop.
I have a separate process running that eventually does a "read" on that same device, but only after a unix local pipe is ready to be read. Is it correct to say that the read() on the device from the 2nd process will read everything that's ready, not just one packet at a time, even though my other process is set up to use pcap_loop to read a packet at a time?
I have a socket bound to a NIC that I am using to capture packets in a pcap_loop.
You say "socket", so I'm guessing that this is Linux (it could also be IRIX, but that's a lot less likely, and the answer is the same in either case; other OSes don't use sockets in libpcap, the native capture mechanism on those OSes uses mechanisms other than sockets).
I have a separate process running that eventually does a "read" on that same device, but only after a unix local pipe is ready to be read. Is it correct to say that the read() on the device from the 2nd process will read everything that's ready, not just one packet at a time,
No. A PF_PACKET socket returns one packet at a time from a read().
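For instance (Linux; needs CAP_NET_RAW/root; a sketch only):

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>

int main(void)
{
    int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (s < 0) { perror("socket"); return 1; }

    unsigned char buf[65536];
    ssize_t n = read(s, buf, sizeof(buf));   /* delivers exactly one link-layer frame */
    printf("read one frame of %zd bytes\n", n);

    close(s);
    return 0;
}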
There is, by the way, no guarantee that reading from the socket with a read and handling the same socket in libpcap at the same time will work. Libpcap might be using the memory-mapped mechanism to get the packets; unless you've seen documentation on how the memory-mapped mechanism works with read()s done elsewhere, or have read the Linux kernel code enough to figure out how it works, you might not want to assume it'll work the way you want.
If, however, this is FreeBSD, as suggested (but not stated) by the tag, then what libpcap is using is a BPF device, *NOT* a socket. A read() will give you an entire bufferful of packets, and the read()s done by libpcap will give libpcap an entire bufferful of packets, even if it happens to call your callback once per packet. The same issues of read() vs. memory-mapped access could occur, but the memory-mapped BPF in later versions of FreeBSD isn't, by default, used by libpcap.

How to set a timeout in connect/send? (AS400 iSeries V5R4, RPG)

Following this RPG socket tutorial, we created a socket client in RPG that calls a Java server socket.
The problem is that the connect()/send() operations block, and we have a requirement that if the connect/send can't be done within, say, a second, we just log it and finish.
If I set the socket to non-blocking mode (I think with fcntl), we don't fully understand how to proceed, and can't find any useful documentation with examples for it.
I think that if I connect() on a non-blocking socket I have to do select(..., timeout), which tells us whether the connect succeeded and whether we are able to send(bytes). But if we send(bytes) afterwards, since it is now a non-blocking socket (which returns immediately after the call), how do I know that send() actually sent the bytes to the server before closing the socket?
I could fall back to implementing the client socket on the AS400 as a Java or C procedure, but I really want to keep it in a simple RPG program.
Would somebody help me understand how to do that please ?
Thanks !
In my opinion, that RPG tutorial you mention has a slight defect. What I believe is causing your confusion is the following section's code:
...
Consequently, we typically call the
send() API like this:
D miscdata S 25A
D rc S 10I 0
C eval miscdata = 'The data to send goes here'
C eval rc = send(s: %addr(miscdata): 25: 0)
c if rc < 25
C* for some reason we weren't able to send all 25 bytes!
C endif
...
If you read the documentation of send(), you will see that a return value greater than -1 does not indicate an error, yet the code above treats it as if an error had occurred. In fact, the sum of the return values must equal the size of the buffer, assuming that you keep advancing the pointer into the buffer to reflect what has already been sent. Look at Beej's Guide to Network Programming. You might also like to look at Richard Stevens' book UNIX Network Programming, Volume 1 for really detailed explanations.
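The usual fix, sketched in C rather than RPG (the same logic carries over to the RPG prototypes), is a loop that keeps calling send() until the whole buffer has gone out:

#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling send() until all len bytes are out (or an error occurs).
   Returns 0 on success, -1 on error. */
int send_all(int s, const char *buf, size_t len)
{
    size_t sent = 0;

    while (sent < len) {
        ssize_t n = send(s, buf + sent, len - sent, 0);
        if (n <= 0)
            return -1;          /* error, or the peer closed the connection */
        sent += (size_t)n;      /* advance past the bytes already accepted */
    }
    return 0;
}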
As to the problem of determining if the last send before close() did the actual send ... well the paragraph above explains how to determine what portion of the data was sent. However, calling close() will attempt to send all unsent data unless SO_LINGER is set.
fcntl() is used to control blocking, while setsockopt() is used to set SO_LINGER.
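For the timeout itself, a common approach (sketched in C; the equivalent APIs exist in the RPG prototypes) is a non-blocking connect() followed by select() with a timeout and a getsockopt(SO_ERROR) check:

#include <errno.h>
#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Connect with a timeout in seconds. Returns 0 on success, -1 on error or timeout.
   The socket is left in non-blocking mode afterwards. */
int connect_with_timeout(int s, const struct sockaddr *addr, socklen_t addrlen, int seconds)
{
    fcntl(s, F_SETFL, fcntl(s, F_GETFL, 0) | O_NONBLOCK);   /* switch to non-blocking */

    if (connect(s, addr, addrlen) == 0)
        return 0;                          /* connected immediately */
    if (errno != EINPROGRESS)
        return -1;                         /* immediate failure */

    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(s, &wfds);
    struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };

    if (select(s + 1, NULL, &wfds, NULL, &tv) <= 0)
        return -1;                         /* timed out (or select() failed) */

    int err = 0;
    socklen_t len = sizeof(err);
    getsockopt(s, SOL_SOCKET, SO_ERROR, &err, &len);
    return err == 0 ? 0 : -1;              /* the deferred connect() result */
}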
The abstraction of network communications being used is BSD sockets. There are some slight differences in implementations across OS's but it is generally quite homogeneous. This means that one can generally use documentation written for other OS's for the broad overview. Most of the time.