How can I listen on a TCP port in kernel space (FreeBSD)? - sockets

As the title says, how can I work with TCP sockets in kernel space?
Are there any tricky points to be aware of?

Look at ng_ksocket. Even if you don't end up using netgraph, it is a nice implementation of kernel-level socket manipulation.
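If you would rather call the in-kernel socket API directly, here is a minimal sketch of how a kernel module might create a listening TCP socket with socreate()/sobind()/solisten(). The exact signatures vary between FreeBSD versions, so treat this as an outline under that assumption, not copy-paste code:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/proc.h>
    #include <sys/socket.h>
    #include <sys/socketvar.h>
    #include <netinet/in.h>

    /* Sketch: create a TCP socket inside the kernel and listen on port 8080.
     * Would be called from a module event handler; error handling abbreviated. */
    static int
    klisten(struct socket **sop)
    {
        struct sockaddr_in sin;
        struct thread *td = curthread;
        int error;

        error = socreate(AF_INET, sop, SOCK_STREAM, IPPROTO_TCP,
            td->td_ucred, td);
        if (error != 0)
            return (error);

        bzero(&sin, sizeof(sin));
        sin.sin_len = sizeof(sin);
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port = htons(8080);

        error = sobind(*sop, (struct sockaddr *)&sin, td);
        if (error == 0)
            error = solisten(*sop, 5, td);      /* backlog of 5 */
        if (error != 0)
            soclose(*sop);
        return (error);
    }

For accepting connections and getting notified of incoming data, see how ng_ksocket wires up the socket upcalls; that is the trickiest part to get right.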

I found a post on linuxjournal.com about networking in the kernel.
It may be helpful.

Related

Where does the Linux kernel allocate a usable port to the client for TCP/UDP?

I already know the path TCP/UDP take to send and receive, e.g. tcp_sendmsg() -> tcp_transmit_skb() -> ip_queue_xmit(), but I haven't found where the kernel allocates a usable (ephemeral) port to the client. Since ports are only meaningful at the transport layer, I expect the allocation to happen somewhere in that code, but I haven't seen it.
Can anyone help me? Thanks for any help.
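For context, the behaviour in question is easy to observe from userspace: if you connect() without binding first, the kernel picks the ephemeral source port for you, and getsockname() reveals its choice. A minimal sketch (the destination address is made up for illustration):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in dst, local;
        socklen_t len = sizeof(local);
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(80);
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* example host */

        /* No bind(): the kernel allocates the ephemeral source port here. */
        if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) == 0) {
            getsockname(fd, (struct sockaddr *)&local, &len);
            printf("kernel chose source port %u\n", ntohs(local.sin_port));
        }
        close(fd);
        return 0;
    }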

TCP - possible for same client-side port to be used for different connections by different applications simultaneously?

Is it possible in TCP for different processes not sharing the same executable image (so no fork(), for example) to use the same client-side port on Windows, Linux or OS X? This is specifically related to the socket options SO_REUSEADDR and SO_REUSEPORT set using setsockopt(), I believe.
As far as I've read, I believe it is possible for the same process/image to do this, but I haven't found information about multiple processes/images. I would imagine it is theoretically possible, since each connection is identified by the 5-valued tuple [IP_PROTO, src_ip:src_port, dst_ip:dst_port]. So I would assume that, as long as multiple TCP connections sharing a client-side port are not made to the same dst_ip:dst_port, this should work.
UDP is not connection-oriented and has no real distinction between client and server, so for UDP this question doesn't make a lot of sense.
For TCP, you can use SO_REUSEADDR to bind multiple clients to the same port, but why would you want to? Normally you leave the client socket unbound before making a connection and let the kernel pick an unused port for you.
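To make the answer concrete, here is a sketch of explicitly binding a client socket before connect() with SO_REUSEADDR set. Whether a second process can bind the same port while the first connection is alive is OS-dependent (on Linux 3.9+ you would additionally need SO_REUSEPORT on both sockets), so treat this as an experiment to run, not a guarantee; addresses and ports are made up:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        struct sockaddr_in local, dst;

        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

        memset(&local, 0, sizeof(local));
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(50000);           /* fixed client-side port */
        if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0)
            perror("bind");

        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(80);
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* example dst */
        if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0)
            perror("connect");

        close(fd);
        return 0;
    }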

Using a socket to communicate between processes on the same host - is it OK to go with UDP?

I want to make sure: if I use UDP within a single host, should I care about packet loss?
Yes, you should care about reliability when using UDP. Even on localhost there is no guarantee that packets are not lost, because the protocol specification does not ensure this. It also depends on the operating system's UDP implementation: behaviour may differ between operating systems as far as reliability is concerned, since the UDP specification defines no rule here.
Also, the order of delivery in UDP is not guaranteed, so you should take care of that as well when using UDP for IPC.
I hope this helps.
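One common way to handle both concerns in application code is to prefix each datagram with a sequence number, so the receiver can detect drops and reordering. A minimal receive-side sketch, assuming the sender writes a 32-bit counter in network byte order at the front of every datagram (port number made up):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr;
        unsigned char buf[2048];
        uint32_t expected = 0;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(9999);             /* arbitrary port */
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        for (;;) {
            ssize_t n = recv(fd, buf, sizeof(buf), 0);
            uint32_t seq;

            if (n < (ssize_t)sizeof(seq))
                continue;                        /* runt datagram, ignore */
            memcpy(&seq, buf, sizeof(seq));
            seq = ntohl(seq);                    /* sender used htonl() */

            if (seq > expected)
                printf("lost %u datagram(s)\n", seq - expected);
            else if (seq < expected)
                printf("datagram %u arrived out of order\n", seq);
            expected = seq + 1;
            /* payload is buf + 4, length n - 4 */
        }
    }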

Listening on a TCP port on the iPhone

I need to listen on a TCP port and collect the binary data from that port on my iPhone. How can this be done? I have searched a lot but did not find anything useful. Any links or sample code would be greatly appreciated.
The only thing you need to do is open a socket. You have two options:
Create the socket in pure C:
Sockets in C
Or use the classes that Apple provides to work with sockets:
Introduction to Stream Programming Guide for Cocoa
If you are going to do something simple, the first option is the easiest.
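For the pure-C option, the classic BSD listen/accept loop works on iOS too, since it exposes the POSIX socket API. A minimal sketch (port number chosen arbitrarily; in a real app you would run this off the main thread or use Apple's asynchronous APIs instead of blocking):

    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr;
        unsigned char buf[4096];
        int srv = socket(AF_INET, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);             /* arbitrary port */

        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 1);

        int conn = accept(srv, NULL, NULL);      /* blocks until a client connects */
        ssize_t n;
        while ((n = recv(conn, buf, sizeof(buf), 0)) > 0) {
            /* buf now holds n bytes of binary data from the peer */
            printf("received %zd bytes\n", n);
        }
        close(conn);
        close(srv);
        return 0;
    }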
There is a very useful socket library called GCDAsyncSocket on GitHub that can be used to make both TCP and UDP sockets, and it comes with delegate methods for reading and writing data:
GCDAsyncSocket

Would we see any speedup using ZeroMQ instead of TCP Sockets if the two processes communicating are on the same machine?

I understand that 0MQ is supposed to be faster than TCP Sockets in a clustered environment and I can see where that would be the case (I think that's what they're referring to when they say "Faster than TCP, for clustered products and supercomputing" on the 0MQ website). However, will I see any kind of speedup using 0MQ instead of TCP sockets to communicate between two processes running on the same machine?
Well, the short version is: give it a try.
The slightly longer version is that writing TCP socket code can be hard; there are a lot of things that are easy to get wrong, whereas 0MQ guarantees the message will be delivered in its entirety. It is also written by experts in network sockets, which, with the best will in the world, you probably aren't, and they use a few advanced tricks to speed things along.
Note that if one of the processes runs in a VM, you are not actually running on one machine, because the VM is treated as a separate machine. This means TCP sockets have to run through the whole network stack and cannot take the shortcuts they take when you communicate between processes on a single machine.
However, you could try UDP multicast under ZeroMQ to see if that speeds up your application. UDP is less reliable on a wide area network, but in the closed environment of a VM talking to its host you can safely skip all the TCP reliability machinery.
I guess the ipc:// transport should be faster than TCP. If you are willing to move to a single process, the inproc:// transport is definitely going to be much faster.
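Since ZeroMQ addresses transports by endpoint string, trying all of them is cheap: the same code runs over tcp://, ipc://, and inproc:// with only the endpoint changed, which makes the "give it a try" benchmarking straightforward. A minimal echo sketch using the libzmq C API (endpoint strings made up for illustration):

    #include <assert.h>
    #include <stdio.h>
    #include <zmq.h>

    int main(void)
    {
        /* Swap in "ipc:///tmp/bench.sock" or "inproc://bench" to compare
         * transports (inproc requires both sockets to share one context). */
        const char *endpoint = "tcp://127.0.0.1:5555";

        void *ctx = zmq_ctx_new();
        void *rep = zmq_socket(ctx, ZMQ_REP);
        void *req = zmq_socket(ctx, ZMQ_REQ);
        assert(zmq_bind(rep, endpoint) == 0);
        assert(zmq_connect(req, endpoint) == 0);

        char buf[16];
        zmq_send(req, "ping", 4, 0);                 /* client request  */
        int n = zmq_recv(rep, buf, sizeof(buf), 0);  /* server receives */
        zmq_send(rep, buf, n, 0);                    /* server echoes   */
        n = zmq_recv(req, buf, sizeof(buf), 0);      /* client reply    */
        printf("echoed %.*s over %s\n", n, buf, endpoint);

        zmq_close(req);
        zmq_close(rep);
        zmq_ctx_destroy(ctx);
        return 0;
    }

Wrap the send/recv exchange in a loop with a timer and you have a crude latency benchmark for each transport.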
I think (though I have not tested it) that the answer is no, as ZMQ most likely uses the same standard C library underneath and adds some message headers of its own.
The same applies to UDP.
The same applies to IPC pipes.
ZMQ could be just as fast, but since it adds headers it is not likely to be faster.
It could be a different story if you really need some sort of header anyway, say for message size or type, and ZMQ has implemented it better than you would. But I digress.