UDP poll() issues with POLLOUT - sockets

I want to code a server for multiple clients with non-blocking UDP sockets, and I have an issue with switching a client socket into POLLOUT mode ...
A client first sends an initial datagram to the server and then only reads from the server. The server broadcasts datagrams to multiple clients in a non-blocking way. So I have an array of
struct pollfd clients_polled[MAX_NUMBER_OF_CLIENTS + 1]
Then I initialize it this way:
/* init clients_polled array */
for (i = 0; i < MAX_NUMBER_OF_CLIENTS; ++i) {
    clients_polled[i].fd = -1;
    clients_polled[i].events = POLLIN;
    clients_polled[i].revents = 0;
}
Then I create the listening socket:
clients_polled[0].fd = socket(AF_INET, SOCK_DGRAM, 0);
Then I bind it and call fcntl() to make it non-blocking. Then I enter an infinite loop in which I first call
poll_ret = poll(clients_polled, MAX_NUMBER_OF_CLIENTS, timeout);
and if there is a POLLIN event on the listening socket I read it, add the new client, and then send some data to all active clients. So say the first client comes in; after reading from it I want to change its event flag from POLLIN to POLLOUT so that the server can send to it in a non-blocking way:
clients_polled[1].events = POLLOUT;
clients_polled[1].fd = ??
How shall I set .fd for it? Shall I assign it to the original clients_polled[0].fd or create a new socket like
clients_polled[1].fd = socket(AF_INET, SOCK_DGRAM, 0);
Either way I get .revents == 1 and nothing is sent over to the client.
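For reference, a minimal sketch of the flow described above, kept deliberately simple: one UDP socket, with each client's address remembered from recvfrom() and reused by sendto(). The client_addrs bookkeeping, the buffer, and the "tick" payload are hypothetical illustrations, not a tested fix:
struct sockaddr_in client_addrs[MAX_NUMBER_OF_CLIENTS];
int n_clients = 0;
for (;;) {
    poll_ret = poll(clients_polled, MAX_NUMBER_OF_CLIENTS + 1, timeout);
    if (poll_ret <= 0)
        continue;
    if (clients_polled[0].revents & POLLIN) {
        char buf[512];
        struct sockaddr_in from;
        socklen_t fromlen = sizeof(from);
        ssize_t n = recvfrom(clients_polled[0].fd, buf, sizeof(buf), 0,
                             (struct sockaddr *)&from, &fromlen);
        if (n >= 0 && n_clients < MAX_NUMBER_OF_CLIENTS)
            client_addrs[n_clients++] = from;   /* remember the new client */
    }
    /* broadcast over the same non-blocking socket to every known client */
    for (i = 0; i < n_clients; ++i)
        sendto(clients_polled[0].fd, "tick", 4, 0,
               (struct sockaddr *)&client_addrs[i], sizeof(client_addrs[i]));
}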

Can I use ZeroMQ sockets to change communication mechanism between two microservices that use REST?

How can we cleanly transform communication based on an HTTP API into message-based communication using the ZMQ library?
In case you indeed want to do so, one may design a kind of Mediator, using ZeroMQ tools.
ZeroMQ has a set of multi-level abstractions, where AccessPoints typically have a certain "behaviour" ( a distributed behaviour ) they perform among themselves.
Your indicated target aims at using no such behaviour, but rather at some sort of transparent, (almost) wire-level handling of data-flows.
For this very purpose let me direct your kind attention first to the concept:
- ZeroMQ Hierarchy in Less than Five Seconds
and next to a possible tool, feasible to help in the given task:
- ZMQ_STREAM Scalable Formal Communication Archetype ( for an AccessPoint )
A socket of type ZMQ_STREAM is used to send and receive TCP data from a non-ØMQ peer, when using the tcp:// transport. A ZMQ_STREAM socket can act as client and/or server, sending and/or receiving TCP data asynchronously.
When receiving TCP data, a ZMQ_STREAM socket shall prepend a message part containing the identity of the originating peer to the message before passing it to the application. Messages received are fair-queued from among all connected peers.
When sending TCP data, a ZMQ_STREAM socket shall remove the first part of the message and use it to determine the identity of the peer the message shall be routed to, and unroutable messages shall cause an EHOSTUNREACH or EAGAIN error.
To open a connection to a server, use the zmq_connect call, and then fetch the socket identity using the ZMQ_IDENTITY zmq_getsockopt call.
To close a specific connection, send the identity frame followed by a zero-length message (see EXAMPLE section).
When a connection is made, a zero-length message will be received by the application. Similarly, when the peer disconnects (or the connection is lost), a zero-length message will be received by the application.
You must send one identity frame followed by one data frame. The ZMQ_SNDMORE flag is required for identity frames but is ignored on data frames.
Example:
/* Create Context-Engine */
void *ctx = zmq_ctx_new (); assert (ctx);

/* Create ZMQ_STREAM socket */
void *socket = zmq_socket (ctx, ZMQ_STREAM); assert (socket);
int rc = zmq_bind (socket, "tcp://*:8080"); assert (rc == 0);

/* Data structure to hold the ZMQ_STREAM ID */
uint8_t id [256];
size_t id_size = 256;

/* Data structure to hold the ZMQ_STREAM received data */
uint8_t raw [256];
size_t raw_size = 256;

while (1) {
    /* Get HTTP request; ID frame and then request */
    id_size = zmq_recv (socket, id, 256, 0); assert (id_size > 0);
    do {
        raw_size = zmq_recv (socket, raw, 256, 0); assert (raw_size >= 0);
    } while (raw_size == 256);

    /* Prepares the response */
    char http_response [] =
        "HTTP/1.0 200 OK\r\n"
        "Content-Type: text/plain\r\n"
        "\r\n"
        "Hello, World!";

    /* Sends the ID frame followed by the response */
    zmq_send (socket, id, id_size, ZMQ_SNDMORE);
    zmq_send (socket, http_response, strlen (http_response), 0);

    /* Closes the connection by sending the ID frame followed by a zero response */
    zmq_send (socket, id, id_size, ZMQ_SNDMORE);
    zmq_send (socket, 0, 0, 0);
}
zmq_close (socket); zmq_ctx_destroy (ctx); /* Clean Close Sockets / Terminate Context */

BSD socket connect + select (client)

There must be something wrong in the code below, but I don't seem to be able to use a non-blocking client connect() in combination with a select() statement. Please ignore the lack of error handling below.
I seem to have two issues:
1. select blocks until the timeout (60 s) if I try to connect to port 80 on an internet server.
2. trying to connect to an existing or non-existing port on 127.0.0.1 always returns from the select instantly, with no way to distinguish between success and failure to connect.
What am I missing in my understanding of BSD non-blocking sockets in combination with select?
fd_set readfds;
FD_ZERO(&readfds);

struct timeval tv;
tv.tv_sec = 60;
tv.tv_usec = 0;

struct sockaddr_in dest;
int socketFD = socket(AF_INET, SOCK_STREAM, 0);
memset(&dest, 0, sizeof(dest));
dest.sin_family = AF_INET;
dest.sin_addr.s_addr = inet_addr("127.0.0.1");
dest.sin_port = htons(9483);

long arg;
arg = fcntl(socketFD, F_GETFL, NULL);
arg |= O_NONBLOCK;
fcntl(socketFD, F_SETFL, arg);

if (connect(socketFD, (struct sockaddr *)&dest, sizeof(struct sockaddr)) < 0 && errno == EINPROGRESS) {
    //now add it to the read set
    FD_SET(socketFD, &readfds);
    int res = select(socketFD+1, &readfds, NULL, NULL, &tv);
    int error = errno;
    if (res > 0 && FD_ISSET(socketFD, &readfds)) {
        NSLog(@"errno: %d", error); //Always 36
    }
}
errno is set in your original attempt to connect -- legitimately: that is, it's in-progress. You then call select. Since select didn't fail, errno is not being reset. System calls only set errno on failure; they do not clear it on success.
The connect may have completed successfully. You aren't checking that though. You should add a call to getsockopt with SO_ERROR to determine whether it worked. This will return the error state on the socket.
One other important note. According to the manual page (https://www.freebsd.org/cgi/man.cgi?query=connect&sektion=2), you should be using the writefds to await completion of the connect. I don't know whether the readfds will correctly report the status.
[EINPROGRESS]  The socket is non-blocking and the connection cannot be completed immediately. It is possible to select(2) for completion by selecting the socket for writing.
See also this very similar question: Using select() for non-blocking sockets to connect always returns 1
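A minimal sketch of that pattern (wait for writability, then check SO_ERROR), assuming socketFD is already non-blocking and connect() has returned with errno == EINPROGRESS:
fd_set writefds;
FD_ZERO(&writefds);
FD_SET(socketFD, &writefds);
struct timeval tv = { 60, 0 };                     /* 60 s timeout */
int res = select(socketFD + 1, NULL, &writefds, NULL, &tv);
if (res > 0 && FD_ISSET(socketFD, &writefds)) {
    int so_error = 0;
    socklen_t len = sizeof(so_error);
    getsockopt(socketFD, SOL_SOCKET, SO_ERROR, &so_error, &len);
    if (so_error == 0) {
        /* connected */
    } else {
        /* connect failed; so_error holds the reason, e.g. ECONNREFUSED */
    }
} else if (res == 0) {
    /* timed out */
}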

Winsock TCP connection, send fine but recv firewall blocked

I have an application that sends a GET request using Winsock on port 80 over a TCP socket. A few users have reported an issue where no response is received; looking at network logs and seeing that the network device is getting the data but the app isn't, it was clear that the firewall was blocking it.
Having disabled the firewall, it then worked fine, but what I don't understand is why it was getting blocked. The connection is created from the user's computer, it connects fine and sends (which I assume automatically opens a port), so how can data be lost on the same connection when it is received? Should I be providing additional Winsock settings? Or is there simply no way to stop the firewall from blocking an already active connection?
Here is a stripped down version of the winsock code
SOCKET sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (sock == INVALID_SOCKET)
    return -1;

struct sockaddr_in client;
memset(&client, 0, sizeof(client));
client.sin_family = AF_INET;
client.sin_port = htons(80);
client.sin_addr.s_addr = inet_addr(inet_ntoa(*addr_list[0]));

if (connect(sock, (struct sockaddr *)&client, sizeof(client)) < 0){
    closesocket(sock);
    return -1;
}

if (send(sock, buffer, buflength, 0) != buflength){
    closesocket(sock);
    return -1;
}

//get response
response = "";
int resp_leng = BUFFERSIZE;
while (resp_leng == BUFFERSIZE)
{
    resp_leng = recv(sock, (char*)&buffer, BUFFERSIZE, 0);
    if (resp_leng > 0)
        response += std::string(buffer).substr(0, resp_leng);
    else
        return -1;
}
closesocket(sock);
Your while loop exits if a recv() returns less than BUFFERSIZE. This is wrong -- you must always assume that recv() can return any amount of data from 1 byte up to and including the supplied buffer size.
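As a sketch, a loop that follows that rule reads until recv() reports that the peer closed the connection (return value 0) or an error occurred, instead of stopping on a short read; the buffer handling is only indicated in comments:
char buffer[BUFFERSIZE];
int n;
for (;;)
{
    n = recv(sock, buffer, BUFFERSIZE, 0);
    if (n > 0)
    {
        /* consume exactly n bytes, e.g. response.append(buffer, n) */
    }
    else if (n == 0)
    {
        break;              /* peer closed the connection cleanly */
    }
    else
    {
        /* SOCKET_ERROR: inspect WSAGetLastError() */
        break;
    }
}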

client-server code. How to bind the data connection to a specific port

I am trying to do the following:
Let us say I start a TCPServer on machine X. Now, I want to connect to the TCPServer from machine Y, but I want to specify the ports (both sender and receiver), on which the data communication should take place. Also, the TCPServer handles multiple clients at the same time.
MachineX: ./TCPServer
MachineY: ./TCPClient -SP 5000 -DP 5000
I have written the code for a multithreaded server (in C on UNIX), and it works fine. Basically, it spawns one thread per connection. But I am not sure how to include the above functionality.
Thank you for your time!
Prior to calling connect(), call bind().
I'm assuming you had to do this for the server code, right? Otherwise, how do you get your server (running on MachineX) to listen on port 5000?
In any case, here's a C example of binding the local end of the connection to port 5000.
Example:
int sock = socket(AF_INET, SOCK_STREAM, 0);
struct sockaddr_in addrRemote = {0};
struct sockaddr_in addrLocal = {0}; // zero init so that sin_addr is already INADDR_ANY
int result;

addrLocal.sin_family = AF_INET;
addrLocal.sin_port = htons(5000);
result = bind(sock, (struct sockaddr*)&addrLocal, sizeof(addrLocal));
if (result < 0)
    return;

addrRemote.sin_family = AF_INET;
addrRemote.sin_port = htons(5000);
addrRemote.sin_addr = <ip of MachineX in network byte order>;
result = connect(sock, (struct sockaddr*)&addrRemote, sizeof(addrRemote));
if (result < 0)
    return;
It's assumed that TCPServer running on machine X is listening on port 5000.

How to read the TCP packets of a stream using a socket in C?

Let me first tell you what I am trying to do.
I am trying to write a very simple proxy server.
I used the socket API to create a socket.
socket = socket(AF_INET, SOCK_STREAM, 0);
My proxy server worked fine until I tried it with streaming data.
What I did was: my server socket listened to the requests and parsed them, then forwarded them to the actual server; I then used the read() call to read the packet and blindly forwarded it back to the client.
For all HTML pages and images it works fine, but when I try to forward a streaming video I am not able to do it.
My socket always returns the application-layer data (the HTTP packet), but in a streaming video only the first packet is HTTP and all the rest are just TCP packets. So I am able to forward only the first HTTP packet. When I try to read the other packets which contain data (which are all TCP) I don't get anything at the application layer (which is obvious, as there is nothing at the application layer in those packets). So I am stuck and I do not know how to read those packets from the TCP layer (I don't want to use a raw socket) and get my job done.
Thanks in advance.
You have to parse the packet header to know how much data to read from the socket. First, use a ring buffer (a circular one!), for example built on the BSD sys/queue.h macros, to order the data received from the stream.
The code below shows how to extract header_length, total_length, and the source and destination addresses of an IPv4 packet at layer 3. Refer to the IPv4 packet layout to understand the offsets:
typedef struct {
    unsigned char version;
    unsigned char header_length;
    unsigned short total_length;
    struct in_addr src;
    struct in_addr dst;
} Packet;

int rb_packet_write_out(RingBuffer *b, int fd, int count) {
    int i;
    for (i = 0; i < count; i++) {
        if (b->level < 20) {
            return i;
        }
        Packet p;
        unsigned char *start = b->blob + b->read_cursor;
        unsigned char b1 = start[0];
        p.version = b1 >> 4;
        p.header_length = b1 & 0xf;
        p.total_length = bigendian_deserialize_uint16(start + 2);
        if (b->level < p.total_length) {
            return i;
        }
        memcpy(&(p.src), start + 12, 4);
        memcpy(&(p.dst), start + 16, 4);
        char s[INET_ADDRSTRLEN], d[INET_ADDRSTRLEN];  /* large enough for inet_ntop() */
        inet_ntop(AF_INET, &(p.src), s, INET_ADDRSTRLEN);
        inet_ntop(AF_INET, &(p.dst), d, INET_ADDRSTRLEN);
        L_DEBUG("Packet: v%u %s -> %s (%u)", p.version, s, d, p.total_length);
    }
    return i;
}
If you use the socket API, then you are on the layer below HTTP, that is, to you everything is "just TCP". If the connection is stuck somewhere, it is most likely that something else is broken. Note there is no guarantee that the HTTP request or reply header will even fit in a single packet; they just usually do.
An HTTP 1.1 compliant streaming server will use "Transfer-Encoding: chunked" and report the length of each chunk rather than the length of the entire file; you should keep that in mind when proxying.
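For reference, a chunked reply carries its body as a series of hex-length-prefixed chunks, terminated by a zero-length chunk; a minimal example looks like this (every line ends with CRLF, and a final blank line follows the "0"):
HTTP/1.1 200 OK
Transfer-Encoding: chunked

9
chunkdata
5
more!
0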
So what I did was my server socket listened to the requests and parsed them
Why? An HTTP proxy doesn't have to parse anything except the first line of the request, to know where to make the upstream connection to. Everything else is just copying bytes in both directions.
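As a sketch of that byte-copying step, assuming from_fd and to_fd are hypothetical, already-connected socket descriptors (called once per direction):
static int relay(int from_fd, int to_fd) {
    char buf[4096];
    ssize_t n, off, w;
    while ((n = read(from_fd, buf, sizeof(buf))) > 0) {
        for (off = 0; off < n; off += w) {      /* write() may be partial */
            w = write(to_fd, buf + off, n - off);
            if (w < 0)
                return -1;
        }
    }
    return (n == 0) ? 0 : -1;                   /* 0 means the peer closed cleanly */
}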