sendto not working on VxWorks - sockets

I asked this question before and had no resolution (still having the problem). I am stumped because the function returns without error and NO DATA is sent! This code works on Linux ... the VxWorks version does not (sendto does not send, though it returns without an ERROR).
The synopsis: I am writing a simple echo server. The server successfully receives
the data (from an x86 box) and claims it successfully SENT it back.
However, NO DATA is received on the client (netcat on an x86). This
code is running on VxWorks 5.4 on a PowerPC box ...
Is the UDP data being buffered somehow?
Could another task be preventing sendto from sending? (Not to go off on a wild goose chase here, but I taskSpawn my application with a normal priority, i.e. below critical tasks like the network task, so this should be fine.)
Could VxWorks be buffering my UDP data?
I HAVE set up my routing table ... pinging works!
There is NO firewall AFAIK ...
What are the nuances of sendto, and what would prevent my data from
reaching the client?
while (1)
{
    readlen = recvfrom(sock, buf, BUFLEN, 0, (struct sockaddr *) &client_address, &slen);
    if (readlen == ERROR)
    {
        printf("RECVFROM FAILED()\n");
        return (ERROR);
    }
    printf("Received %d bytes FROM %s:%d\nData: %s\n\n",
           readlen, inet_ntoa(client_address.sin_addr),
           ntohs(client_address.sin_port), buf);

    // Send it right back to the client using the open UDP socket,
    // but send it to OUTPORT
    client_address.sin_port = htons(OUTPORT);

    // Remember slen is a value (not an address ... in, NOT in-out)
    sendlen = sendto(sock, buf, BUFLEN, 0, (struct sockaddr *) &client_address, slen);
    // more code ....
}

I trust ERROR is defined as -1, right? Then, are you checking the return value of the sendto(2) call? What about the errno(3) value?
One obvious problem I see in the code is that you pass BUFLEN as the length of the message to be sent, while it should actually be readlen - the number of bytes you received.
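For example, a minimal sketch of the corrected send, reusing the variables from the loop above (error reporting shown POSIX-style; VxWorks also exposes errno):

// Echo back only the bytes actually received, and check the result.
sendlen = sendto(sock, buf, readlen, 0, (struct sockaddr *) &client_address, slen);
if (sendlen == ERROR)
{
    printf("SENDTO FAILED, errno %d\n", errno);
    return (ERROR);
}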

Related

Unix Domain Sockets datagram client with receive only

I have a simulator application that uses Unix domain datagram sockets and sends data to a socket path, e.g. /var/lib/XYZ.
sendto is returning -2 because there is no peer at the other end (no other Unix domain socket application is running).
I would like to write a datagram client/peer application using Unix domain sockets for receiving data from the server/simulator (which is sending data to /var/lib/XYZ).
My code is as follows:
#define BUF_SIZE 1024
#define SV_SOCK_PATH "/var/lib/XYZ"
#define SV_SOCK_PATH2 "/var/lib/ABC"
The Unix domain socket is created as below:
struct sockaddr_un svaddr, claddr;
....
sfd = socket(AF_UNIX, SOCK_DGRAM, 0);
if (sfd == -1)
    printf("socket creation failed");

memset(&claddr, 0, sizeof(struct sockaddr_un));
claddr.sun_family = AF_UNIX;
strncpy(claddr.sun_path, SV_SOCK_PATH2, sizeof(claddr.sun_path) - 1);
if (bind(sfd, (struct sockaddr *) &claddr, sizeof(struct sockaddr_un)) == -1)
    printf("bind failed");

/* Construct address of server */
memset(&svaddr, 0, sizeof(struct sockaddr_un));
svaddr.sun_family = AF_UNIX;
strncpy(svaddr.sun_path, SV_SOCK_PATH, sizeof(svaddr.sun_path) - 1);

while (1)
{
    socklen_t len = sizeof(struct sockaddr_un);
    numBytes = recvfrom(sfd, resp, BUF_SIZE, 0, (struct sockaddr *) &svaddr, &len);
    if (numBytes == -1)
        printf("recvfrom error");
    else {
        printf("no of bytes received from server: %d", (int) numBytes);
        printf("Response %d: %s\n", (int) numBytes, resp);
    }
}
remove(claddr.sun_path);
//exit(EXIT_SUCCESS);
}
But the program is not receiving anything... is there anything I've missed?
When it comes to datagrams, there is no real client or server. Either side attempting to send is responsible for addressing datagrams to the other. So, in your code, the setup is all wrong. You're apparently attempting to direct the "server" (but really not a server, just the other peer) to send to you via svaddr but that isn't how it works.
For a datagram AF_UNIX socket, the sender either needs to explicitly specify the receiver's address in a sendto call, or it needs to first connect its socket to the receiver's address. (In the latter case, it can then use send instead of sendto since the peer address has been specified via connect.)
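For instance, a minimal sketch of the connect-then-send variant, reusing sfd and svaddr from the question's code:

/* Connect the datagram socket to the peer once; plain send()/recv()
   then talk to that peer only. */
if (connect(sfd, (struct sockaddr *) &svaddr, sizeof(struct sockaddr_un)) == -1)
    printf("connect failed");
send(sfd, "hello", 5, 0);   /* delivered to SV_SOCK_PATH */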
You can't specify the sending peer's address in the recvfrom call. The socket address argument in the recvfrom is intended to return to you the address from which the datagram was sent. Whatever is in that variable will be overwritten on successful return from recvfrom.
One way datagram peer programs are often structured: the "server" creates a well-known path and binds to it, then a "client" creates its own endpoint and binds to it (constructing a unique socket address for itself), then the client can sendto the server's well-known socket. The server, by using recvfrom to obtain the client's address along with the datagram, can then use sendto along with the address to return a message to the client (without needing to connect its socket). This provides a sort of client-server paradigm on top of the fundamentally equal-peer orientation of the datagram socket.
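As a rough sketch of that shape (paths are placeholders, not the poster's code):

/* Server side: bind the well-known path, learn each client's address
   from recvfrom, and reply with sendto. */
int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
struct sockaddr_un srv;
memset(&srv, 0, sizeof(srv));
srv.sun_family = AF_UNIX;
strncpy(srv.sun_path, "/tmp/echo.sock", sizeof(srv.sun_path) - 1);
bind(fd, (struct sockaddr *) &srv, sizeof(srv));

struct sockaddr_un peer;
socklen_t plen = sizeof(peer);          /* in-out: filled in by recvfrom */
char buf[1024];
ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, (struct sockaddr *) &peer, &plen);
if (n >= 0)
    sendto(fd, buf, n, 0, (struct sockaddr *) &peer, plen);

/* Client side: bind its own unique path first (an unbound datagram
   socket has no address, so the server would have nowhere to reply),
   then sendto the well-known path and recvfrom the reply. */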
Finally, I should mention that it's usually a good idea to use fully specified pathnames to ensure both peers are using the same address even if started from different directories. (Normally, with AF_UNIX, the address is a path name in the file system used to "rendezvous" between the two peers -- so without a full path "some_socket" is "./some_socket" in the current working directory. Some systems, such as linux, also support an abstract "hidden" namespace that doesn't require a full path, but you must use an initial null byte in the name to specify that.)
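A minimal Linux-only sketch of such an abstract address (the name is made up):

struct sockaddr_un a;
memset(&a, 0, sizeof(a));
a.sun_family = AF_UNIX;
memcpy(a.sun_path + 1, "myapp", 5);     /* a.sun_path[0] stays '\0' */
socklen_t alen = offsetof(struct sockaddr_un, sun_path) + 1 + 5;
bind(fd, (struct sockaddr *) &a, alen); /* fd: an AF_UNIX datagram socket */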

sendto() fails for Custom UDP Protocol, using Seagull Protocol Traffic Generator

I'm brand new to working with networking, protocols, and sockets, but this issue has been bothering me for a few days and I just cannot seem to find a solution. I am using Seagull, an open-source multi-protocol traffic generator (source code), to create a client with a custom UDP protocol. The unfortunate thing is that no one really maintains this software anymore; other people have had this problem, there are no posted solutions, and it may be a bug in the generator itself. I was able to write the XML scripts needed to run the traffic generator, and when I ran the client on the local loopback (127.0.0.1) the generator worked fine; I was able to collect the packets, analyze them with Wireshark, and confirm they contained the correct data.
I'm now trying to use this client to send messages to a server on my local network (192.x.x.x), but Seagull keeps failing to send the messages. It's not a network issue, because I've been able to ping the address with no packet loss. I've traced the error back to the sendto() function, which keeps failing with an invalid-argument error. I stepped through the code with GDB with the destination set to both the local loopback and the other IP, and the arguments passed to sendto() were exactly the same, with the exception of different IP addresses in the sockaddr struct, which is of course expected. However, when I look at the registers during the sendto() system call, the one holding the message length turns negative partway through the call, and that is the value returned from the function; this does not happen on the local loopback network. Here is the section of code that calls sendto() and fails:
size_t C_SocketWithData::send_buffer(unsigned char *P_data,
                                     size_t P_size) {
  // T_SockAddrStorage *P_remote_sockaddr,
  // tool_socklen_t *P_len_remote_sockaddr) {
  size_t L_size = 0 ;
  int L_rc ;
  if (m_write_buf_size != 0) {
    // Try to send pending data
    if (m_type == E_SOCKET_TCP_MODE) {
      L_rc = _call_write(m_write_buf, m_write_buf_size) ;
    } else {
      L_rc = _write(m_write_buf, m_write_buf_size) ;
    }
    if (L_rc < 0) {
      SOCKET_ERROR(0,
        "send failed [" << L_rc << "] [" << strerror(errno) << "]");
      switch (errno) {
        case EAGAIN:
          SOCKET_ERROR(0, "Flow control not implemented");
          break ;
        case ECONNRESET:
          break ;
        default:
          SOCKET_ERROR(0, "process error [" << errno << "] not implemented");
          break ;
      }
      return(0);
where _write() is a wrapper for sendto().
I'm not really sure what is going on that causes this, and I've spent hours looking through the source code and tracing what happens, but everything seems normal up until the buffer length is modified in the system call. I've looked at the socket() initialization, binding, and other functions, but everything seems fine. If anyone has any experience with Seagull or this problem, please let me know if you have any suggestions. I've looked through almost every sendto()-related question on this website and have not found a solution.
I am running the client on Ubuntu 14.04 in a VM (VirtualBox) on a Windows 10 host, from which I'm trying to send the messages. Thanks in advance!
I figured out the answer to this after days of debugging and looking through source code, and I want to post an update in case any poor soul in the future has the same problem. The original Seagull implementation always tries to bind the socket before calling send/sendto. Since sendto() automatically binds an unbound UDP socket, I was able to remove the bind for this case.
Original implementation in C_SocketClient::_open (C_Socket.cpp Line 666):
} else {
  L_rc = call_bind(m_socket_id,
                   (sockaddr *)(void *)&(m_remote_addr_info->m_addr_src),
                   SOCKADDR_IN_SIZE(&(m_remote_addr_info->m_addr_src)));
Edited Version:
} else {
  /* UDP does not need to bind first */
  if (m_type != E_SOCKET_UDP_MODE) {
    L_rc = call_bind(m_socket_id,
                     (sockaddr *)(void *)&(m_remote_addr_info->m_addr_src),
                     SOCKADDR_IN_SIZE(&(m_remote_addr_info->m_addr_src)));
  } else {
    L_rc = 0;
  }
Now Seagull works and I am able to send my custom protocol! I opened a pull request for the original source code so that this can possibly be fixed.
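For context, here is a standalone sketch (not Seagull code; the address and port are made up) of why the explicit bind() is unnecessary for UDP: the kernel implicitly binds an unbound UDP socket to an ephemeral local port on the first send.

int fd = socket(AF_INET, SOCK_DGRAM, 0);
struct sockaddr_in dst;
memset(&dst, 0, sizeof(dst));
dst.sin_family = AF_INET;
dst.sin_port = htons(9999);                        /* example port */
inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr); /* example address */

/* No bind() here: the first sendto() auto-binds the local endpoint. */
sendto(fd, "ping", 4, 0, (struct sockaddr *) &dst, sizeof(dst));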

using the socket package for Octave on Ubuntu

I am trying to use the sockets package for Octave on Ubuntu. I am using the Java socket API to connect to Octave; the Java program is the client, and Octave is my server. I just tried the code example from this post:
http://pauldreik.blogspot.de/2009/04/octave-sockets-example.html
There are two problems:
1.)
Using SOCK_STREAM, for some strange reason, certain bytes are received by recv() right after accept(), even though I'm not sending anything from the client. Subsequent messages I send from Java have no effect; it seems the Octave socket has its own idea about what it receives, regardless of what I'm actually sending.
2.)
Using SOCK_DGRAM, there is another problem:
I do receive my actual message this way, but it seems that recv() doesn't remove the first datagram from the queue. Until I send a second datagram to the socket, any subsequent recv() calls will repeatedly read the first datagram as if it were still in the queue. So recv() doesn't even block to wait for a genuinely new datagram; it simply reads the same old one again. This is useless, since I cannot tell my server to wait for news from the client.
Is this how UDP is supposed to behave? I thought datagram packets are really removed from the datagram queue by recv().
This is my server side code:
s=socket(AF_INET, SOCK_DGRAM, 0);
bind(s,12345);
[config,count] = recv(s, 10)
[test,count] = recv(s, 4)
And this is my Java client:
public LiveSeparationClient(String host, int port, byte channels, byte sampleSize,
        int sampleRate, int millisecondsPerFrame) throws UnknownHostException, IOException {
    this.port = port;
    socket = new DatagramSocket();
    this.host = InetAddress.getByName(host);
    DatagramPacket packet = new DatagramPacket(ByteBuffer.allocate(10)
            .put(new byte[]{channels, sampleSize})
            .putInt(sampleRate)
            .putInt(millisecondsPerFrame)
            .array(), 10, this.host, port
    );
    socket.send(packet);
    samplesPerFrame = (int) Math.floor((double) millisecondsPerFrame / 1000.0 * (double) sampleRate);
}
As you can see, I'm sending 10 bytes and receiving all 10 (this works so far) with recv(s, 10). Later in my Java program, packets will be generated and sent as well, but this may take some seconds. In the meantime, the second receive, recv(s, 4), in Octave should wait for a genuinely new datagram packet. But this doesn't happen; it simply reads the first 4 bytes of the same old packet again. recv() doesn't block the second time.
I hope it is not a problem for you to fix this?
Thanks in advance :-)
Marvin
P.S.: Also, I don't understand why listen() and accept() are both necessary when using SOCK_STREAM, but not for SOCK_DGRAM.

Socket programming Client Connect

I am working on client-server programming, referring to this link, and my server is running successfully.
I need to send data continuously to the server.
I don't want to connect() every time before sending each packet. So the first time I just create a socket and send the first packet; for the rest of the data I just use the write() function to write to the socket.
But my problem is that while sending data continuously, if the server goes down or my Ethernet is disabled, write() still returns success.
Is there any method by which I can create the socket only once and keep sending data while still detecting server failure?
The main reason for doing it this way is that, on the server side, I am using a GPRS modem, and calling connect() for every packet makes the modem hang.
For creating the socket I am using the code below:
Gprs_sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (Gprs_sockfd < 0)
{
    Display("ERROR opening socket");
    return 0;
}

server = gethostbyname((const char *) ip_address);
if (server == NULL)
{
    Display("ERROR, no such host");
    return 0;
}

bzero((char *) &serv_addr, sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
bcopy((char *) server->h_addr, (char *) &serv_addr.sin_addr.s_addr, server->h_length);
serv_addr.sin_port = htons(portno);

if (connect(Gprs_sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0)
{
    Display("ERROR connecting");
    return 0;
}
And each time I write to the socket using the code below:
n = write(Gprs_sockfd, data, length);
if (n < 0)
{
    Display("ERROR writing to socket");
    return 0;
}
Thanks in advance.
TCP was designed to tolerate temporary failures. It does byte sequencing, acknowledgments, and, if necessary, retransmissions. All unacknowledged data is buffered inside the kernel's network stack. If I remember correctly, the default is three retransmission attempts (somebody correct me if I'm wrong) with exponential back-off timeouts. That quickly adds up to dozens of seconds, if not minutes.
My suggestion would be to design application-level acknowledgments into your protocol, meaning the server would send a short reply saying how much data it has received so far, say every second. If the client does not receive such an ack within, say, 3 seconds, the client knows the connection is unusable and can close it. By the way, this is easier done with non-blocking sockets and polling functions like select(2) or poll(2).
Edit 0:
I think this would be very relevant here - "The ultimate SO_LINGER page, or: why is my tcp not reliable".
Nikolai is correct here; the behaviour you are experiencing is desirable, as it basically lets you continue transferring data after a network outage without any extra logic in your application. If your application needs to detect outages longer than a specified amount of time, you need to add heartbeating to your protocol. This is the standard way of solving the problem. It also allows you to detect the situation where the network is all right and the receiver is alive, but it has deadlocked (due to a software bug).
Heartbeating can be as simple as Nikolai mentioned -- sending a small packet every X seconds; if the server sees no packet for N*X seconds, the connection is considered dead.
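As a rough sketch of the detection side, reusing Gprs_sockfd and Display() from the question (the 3-second timeout is a placeholder for N*X):

/* After sending, wait up to 3 seconds for an application-level ack;
   treat silence or EOF as a dead connection. */
fd_set rfds;
struct timeval tv;
tv.tv_sec = 3;
tv.tv_usec = 0;
FD_ZERO(&rfds);
FD_SET(Gprs_sockfd, &rfds);

if (select(Gprs_sockfd + 1, &rfds, NULL, NULL, &tv) <= 0)
{
    Display("no ack: assuming the connection is dead");  /* timeout or error */
    close(Gprs_sockfd);
}
else
{
    char ack[16];
    if (read(Gprs_sockfd, ack, sizeof(ack)) <= 0)        /* 0 = peer closed */
    {
        Display("peer closed the connection");
        close(Gprs_sockfd);
    }
}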

When is the send operation finished on a non-blocking socket?

Let's suppose that sock is a Unix socket opened in non-blocking mode, and consider the following function:
void send_int(int sock, int flags) {
    int x = 0xff;
    send(sock, &x, sizeof(int), flags);
}
Is this code "correct"? I'm not sure whether the buffer (x) is copied into some send buffer before send returns, or whether there is a chance that send and send_int return too early and a no-longer-existing buffer is then used, since it lived only on the stack...
No, it is not necessary to preserve the user's send buffer until the send operation completes in non-blocking mode, so your code is fine.
Internally, the send buffer is copied into kernel space, onto the socket's send queue of socket buffers (SKBs).
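One caveat worth adding (my note, not from the references below): on a non-blocking stream socket, send() may queue only part of the buffer, or nothing at all (returning -1 with errno set to EAGAIN/EWOULDBLOCK), so while the stack buffer may be reused as soon as send() returns, the return value still needs checking:

#include <errno.h>
#include <sys/socket.h>

void send_int_checked(int sock, int flags) {
    int x = 0xff;
    ssize_t n = send(sock, &x, sizeof(x), flags);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* kernel send queue full: nothing was queued, retry later */
    } else if (n >= 0 && (size_t) n < sizeof(x)) {
        /* partial write (possible on stream sockets): resend the rest */
    }
}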
References:
The send manpage does not mention such a need
Dave Miller's "How SKBs work"