Inside sock_poll(), the kernel calls sock->ops->poll().
For a TCP socket, sock->ops->poll() is tcp_poll().
For a UDP socket, sock->ops->poll() is udp_poll(), which delegates to datagram_poll().
For a netlink socket, which function gets called for sock->ops->poll()?
static __poll_t sock_poll(struct file *file, poll_table *wait)
{
	struct socket *sock = file->private_data;
	__poll_t events = poll_requested_events(wait), flag = 0;

	/* ... */

	return sock->ops->poll(file, sock, wait) | flag;
}
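Each address family fills in its own struct proto_ops, so the answer lives in that table. In mainline Linux, net/netlink/af_netlink.c points the netlink poll callback at the generic datagram_poll(); an abridged excerpt (surrounding fields omitted, and the exact set varies by kernel version):

static const struct proto_ops netlink_ops = {
	.family = PF_NETLINK,
	.owner  = THIS_MODULE,
	/* ... */
	.poll   = datagram_poll,	/* the same generic poll that udp_poll() builds on */
	/* ... */
};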
Related
I want to code a server with multiple clients using non-blocking UDP sockets, and I have an issue with switching a client socket into POLLOUT mode ...
A client first sends an initial datagram to the server and then only reads from the server. The server shall broadcast datagrams to multiple clients in a non-blocking way. So I have an array of
struct pollfd clients_polled[MAX_NUMBER_OF_CLIENTS + 1];
Then I initialize it this way:
/* init clients_polled array */
for (i = 0; i < MAX_NUMBER_OF_CLIENTS; ++i) {
	clients_polled[i].fd = -1;
	clients_polled[i].events = POLLIN;
	clients_polled[i].revents = 0;
}
Then I create the listening socket:
clients_polled[0].fd = socket(AF_INET, SOCK_DGRAM, 0);
then I bind it and call fcntl() to make it non-blocking. Then I enter an infinite loop in which I first call
poll_ret = poll(clients_polled, MAX_NUMBER_OF_CLIENTS, timeout);
and if there is a POLLIN event on the listening socket, I read it, add the new client, and then send some data to all active clients. Say the first client comes in: after reading from it, I want to switch its event flag from POLLIN to POLLOUT so that the server can send to it in a non-blocking way:
clients_polled[1].events = POLLOUT;
clients_polled[1].fd = ??
How should I set .fd for it? Should I assign it the original clients_polled[0].fd, or create a new socket, like:
clients_polled[0].fd = socket(AF_INET, SOCK_DGRAM, 0);
Either way I get .revents == 1 and nothing is sent over to the client.
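For context, a minimal sketch of the usual pattern (not the asker's code; the client record and buffer handling are illustrative): a UDP server has exactly one socket, so there is no per-client fd. Replies go out via sendto() on the same bound socket, addressed to the peer captured by recvfrom().

#include <sys/socket.h>
#include <netinet/in.h>

/* Hypothetical client record: UDP has no per-client fd, only an address. */
struct client {
	struct sockaddr_in addr;
	socklen_t addr_len;
};

/* On POLLIN: read the datagram and remember who sent it. */
ssize_t read_client(int udp_fd, struct client *c, char *buf, size_t len)
{
	c->addr_len = sizeof(c->addr);
	return recvfrom(udp_fd, buf, len, 0,
			(struct sockaddr *)&c->addr, &c->addr_len);
}

/* On POLLOUT: reply on the SAME socket, addressed to the stored peer. */
ssize_t write_client(int udp_fd, const struct client *c,
		     const char *buf, size_t len)
{
	return sendto(udp_fd, buf, len, 0,
		      (const struct sockaddr *)&c->addr, c->addr_len);
}

One consequence of this design is that POLLOUT on the single socket signals writability toward all clients at once.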
I am writing a kernel module that hooks some system calls (e.g. tcp_send()) using jprobes and sends some information to userspace using netlink sockets:
netlink_unicast(nlsk, skb, pid, MSG_DONTWAIT);
My receive callback is:
void nl_recv(struct sk_buff *skb)
{
	struct nlmsghdr *nlh;

	if (skb == NULL)
		return;

	nlh = (struct nlmsghdr *)skb->data;
	pid = nlh->nlmsg_pid;	/* pid is a module-global used by netlink_unicast() */
	debug(KERN_NOTICE "Kernel Module: Received pid from %u\n", pid);
}
I'd like to pause the execution of my kernel module after every send and resume it on receive.
I have tried using completions and wait queues, but they seem to push the session into a general protection fault (GPF).
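For reference, a minimal sketch of the completion pattern (identifiers like nl_ack and send_and_wait are illustrative, and nlsk stands in for the module's netlink socket). One thing worth checking: the wait must run in a context that is allowed to sleep, and a jprobe handler, which runs with preemption disabled, is not such a context.

#include <linux/completion.h>
#include <linux/netlink.h>
#include <linux/skbuff.h>

static struct sock *nlsk;		/* created elsewhere via netlink_kernel_create() */
static DECLARE_COMPLETION(nl_ack);	/* illustrative name */

/* Must run in process context (a kthread or workqueue), never
 * inside the probe handler itself, because it sleeps. */
static void send_and_wait(struct sk_buff *skb, u32 pid)
{
	netlink_unicast(nlsk, skb, pid, MSG_DONTWAIT);
	if (wait_for_completion_interruptible(&nl_ack))
		return;	/* interrupted by a signal */
}

/* Netlink input callback: userspace replied, so wake the waiter. */
static void nl_recv_ack(struct sk_buff *skb)
{
	complete(&nl_ack);
}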
I am trying to set up a socket to receive multicast UDP packets on VxWorks 6.8.
sin.sin_len = (u_char)sizeof (sin);
sin.sin_family = AF_INET;
sin.sin_addr.s_addr = INADDR_ANY;
/* UDP port number to match for the received packets */
sin.sin_port = htons (mcastPort);
/* bind a port number to the socket */
if (bind(sockDesc, (struct sockaddr *)&sin, sizeof(sin)) != 0)
{
perror("bind");
status = errno;
goto cleanUp;
}
/* fill in the argument structure to join the multicast group */
/* initialize the multicast address to join */
ipMreq.imr_multiaddr.s_addr = inet_addr (mcastAddr);
/* unicast interface addr from which to receive the multicast packets */
ipMreq.imr_interface.s_addr = inet_addr (ifAddr);
printf ("Interface address on which to receive multicast packets: %s\n", ifAddr);
/* set the socket option to join the MULTICAST group */
int code = setsockopt (sockDesc, IPPROTO_IP, IP_ADD_MEMBERSHIP,
(char *)&ipMreq,
sizeof (ipMreq));
The setsockopt() call is returning -1 and errno is being set to 49 (EADDRNOTAVAIL). In Wireshark, when we perform the setsockopt() I can see a properly formed group unsubscribe packet being sent out from the right port/interface. All different combinations of interfaces, ports, and multicast groups give the same result.
I am unable to debug very far into setsockopt(), as there doesn't seem to be anything wrong before the task calls ipcom_pipe_send and ipnet_usr_sock_pipe_recv, and after the recv call errno is set. I don't know how to debug the relevant tNetTask code that may be generating the error.
It could be that there's an issue with the interface index you supplied. Define ipMreq as a struct ip_mreq, which does not have the imr_ifindex field, instead of a struct ip_mreqn, and remove the ipMreq.imr_ifindex = 2; line.
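For comparison, a minimal sketch of the plain struct ip_mreq join the answer describes (names follow the question's code; error handling trimmed, and header names may differ slightly on VxWorks):

struct ip_mreq ipMreq;	/* plain ip_mreq: no imr_ifindex field */

memset(&ipMreq, 0, sizeof(ipMreq));
/* multicast group to join */
ipMreq.imr_multiaddr.s_addr = inet_addr(mcastAddr);
/* local unicast address of the receiving interface */
ipMreq.imr_interface.s_addr = inet_addr(ifAddr);

if (setsockopt(sockDesc, IPPROTO_IP, IP_ADD_MEMBERSHIP,
               (char *)&ipMreq, sizeof(ipMreq)) != 0)
	perror("setsockopt(IP_ADD_MEMBERSHIP)");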
I'm writing a server application and I want to use I/O completion ports, so I wrote a prototype for the server, but I'm facing a problem with GetQueuedCompletionStatus: it never returns (it blocks). Below is my code:
bool CreateSocketOverlappedServer()
{
	WSADATA wsaData;
	SOCKADDR_IN sockaddr;

	if (WSAStartup(MAKEWORD(2, 2), &wsaData)) {
		_tprintf(_T("Unable to start up\n"));
		return false;
	}

	SrvSocket = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, WSA_FLAG_OVERLAPPED);
	if (SrvSocket == INVALID_SOCKET) {
		_tprintf(_T("Unable to start socket\n"));
		return false;
	}

	sockaddr.sin_family = AF_INET;
	sockaddr.sin_port = htons(10000);
	sockaddr.sin_addr.s_addr = INADDR_ANY;

	/* now bind the socket */
	if (bind(SrvSocket, (SOCKADDR *)&sockaddr, sizeof(SOCKADDR_IN)) == SOCKET_ERROR) {
		_tprintf(_T("Unable to bind socket\n"));
		return false;
	}

	if (listen(SrvSocket, 5) == SOCKET_ERROR) {
		_tprintf(_T("Error listening\n"));
		return false;
	}

	return true;
}
void WorkerThread(void *arg)
{
	bool bret = false;
	DWORD dwTransferedBytes = 0;
	CLIENTS *client;
	PPER_IO_OPERATION_DATA data;

	/* Just sleep for now */
	while (true) {
		_tprintf(_T("Entering while\n"));
		bret = GetQueuedCompletionStatus(hIocp, &dwTransferedBytes,
						 (PULONG_PTR)&client,
						 (LPOVERLAPPED *)&data, INFINITE);
		if (!bret) {
			_tprintf(_T("Unable to process completion port\n"));
		}
	}
	//Sleep(10000);
}
void AcceptClientConnections(void *arg)
{
	SOCKET ClientSocket;
	CLIENTS *c;

	_tprintf(_T("Start accepting client connections\n"));
	while (true) {
		ClientSocket = accept(SrvSocket, NULL, NULL);
		if (ClientSocket == INVALID_SOCKET) {
			_tprintf(_T("Unable to accept connection\n"));
			continue;
		}

		/* associate the new socket with the completion port */
		c = (CLIENTS *)malloc(sizeof(CLIENTS));
		c->sock = ClientSocket;
		if (!CreateIoCompletionPort((HANDLE)ClientSocket, hIocp, (ULONG_PTR)c, 0)) {
			_tprintf(_T("Unable to associate with completion port: %d\n"), GetLastError());
		}
	}
}
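(For completeness: the code above uses hIocp without showing its creation. Presumably something like this sketch runs at startup, with _beginthread from <process.h>; the details here are assumptions.)

hIocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
if (hIocp == NULL) {
	_tprintf(_T("Unable to create completion port: %d\n"), GetLastError());
	return false;
}
/* start the worker and accept threads */
_beginthread(WorkerThread, 0, NULL);
_beginthread(AcceptClientConnections, 0, NULL);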
Any ideas?
Thanks in advance.
You are not using the Completion Port correctly, so it has nothing to do, and thus no status to report. Using a Completion Port with sockets is a two-step process, but you are only doing half of the steps.
Read the following MSDN article for details:
Windows Sockets 2.0: Write Scalable Winsock Apps Using Completion Ports
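In practice, the missing step is usually posting an overlapped operation on each accepted socket; completions are only queued for I/O you actually started. A minimal sketch (assuming the question's PER_IO_OPERATION_DATA begins with a WSAOVERLAPPED and carries its own buffer; the field names here are illustrative):

typedef struct {
	WSAOVERLAPPED overlapped;	/* first member, so the worker's cast works */
	WSABUF        wsabuf;
	char          buffer[4096];
} PER_IO_OPERATION_DATA, *PPER_IO_OPERATION_DATA;

/* Call right after CreateIoCompletionPort() for the client socket. */
static bool PostInitialRecv(SOCKET s, PPER_IO_OPERATION_DATA data)
{
	DWORD flags = 0;

	ZeroMemory(&data->overlapped, sizeof(data->overlapped));
	data->wsabuf.buf = data->buffer;
	data->wsabuf.len = sizeof(data->buffer);

	/* Overlapped receive: completes through GetQueuedCompletionStatus(). */
	if (WSARecv(s, &data->wsabuf, 1, NULL, &flags,
		    &data->overlapped, NULL) == SOCKET_ERROR
	    && WSAGetLastError() != WSA_IO_PENDING)
		return false;
	return true;
}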
One possibility: check whether you have associated the listener socket with the completion port (I made this mistake myself). If you haven't, GetQueuedCompletionStatus() will block forever.
I am working on network reliability simulation; I need to simulate packet dropping based on a quality-of-service percentage. Currently I have a DLL that hooks send, sendto, recv, and recvfrom. My hooks then 'drop' packets based on the quality of service.
I just need to apply the hook to UDP packets and not disturb TCP (TCP is used for remote debugging).
Is there a way I can query WinSock for the protocol that a socket is bound to?
int WSAAPI HookedSend(SOCKET s, const char FAR *buf, int len, int flags)
{
	/* if (s is a UDP socket)
	 *	drop according to QoS
	 * else
	 *	send TCP packets undisturbed */
	return send(s, buf, len, flags);
}
I think you could get the socket type by using getsockopt:
int optVal;
int optLen = sizeof(int);

getsockopt(socket,
	   SOL_SOCKET,
	   SO_TYPE,
	   (char *)&optVal,
	   &optLen);

if (optVal == SOCK_STREAM)
	printf("This is a TCP socket.\n");
else if (optVal == SOCK_DGRAM)
	printf("This is a UDP socket.\n");
else
	printf("Error\n");