Using the sockets package for Octave on Ubuntu

I am trying to use the sockets package for Octave on Ubuntu. I am using the Java Sockets API to connect to Octave: the Java program is the client, and Octave is my server. I just tried the code example from this post:
http://pauldreik.blogspot.de/2009/04/octave-sockets-example.html
There are two problems:
1.)
Using SOCK_STREAM, for some strange reason, certain bytes are received by recv() right after accept(), even though I'm not sending anything from the client. Subsequent messages I send with Java have no effect; it seems the Octave socket has its own idea about what it receives, regardless of what I'm actually sending.
2.)
Using SOCK_DGRAM, there is another problem:
I do receive my actual message this way, but it seems that a recv() call doesn't remove the first element from the datagram queue. Until I send a second datagram to the socket, any subsequent recv() calls will repeatedly read the first datagram as if it were still in the queue. So recv() doesn't even block to wait for a new datagram to become available; instead, it simply reads the same old one again. This is useless, since I cannot tell my server to wait for news from the client.
Is this how UDP is supposed to behave? I thought datagram packets are really removed from the datagram queue by recv().
This is my server side code:
s = socket(AF_INET, SOCK_DGRAM, 0);   # UDP socket
bind(s, 12345);
[config, count] = recv(s, 10)         # first datagram: the 10-byte header
[test, count] = recv(s, 4)            # should block until a new datagram arrives
And this is my Java client:
public LiveSeparationClient(String host, int port, byte channels, byte sampleSize, int sampleRate, int millisecondsPerFrame) throws UnknownHostException, IOException {
    this.port = port;
    socket = new DatagramSocket();
    this.host = InetAddress.getByName(host);
    DatagramPacket packet = new DatagramPacket(ByteBuffer.allocate(10)
            .put(new byte[]{channels, sampleSize})
            .putInt(sampleRate)
            .putInt(millisecondsPerFrame)
            .array(), 10, this.host, port
    );
    socket.send(packet);
    samplesPerFrame = (int) Math.floor((double) millisecondsPerFrame / 1000.0 * (double) sampleRate);
}
As you can see, I'm sending 10 bytes and receiving all 10 with recv(s, 10) (this works so far). Later in my Java program, packets will be generated and sent as well, but this may take a few seconds. In the meantime, the second receive, recv(s, 4), in Octave should wait for a genuinely new datagram packet. But this doesn't happen; it simply reads the first 4 bytes of the same old packet again. recv() doesn't block the second time.
I hope it is not a problem for you to fix this?
Thanks in advance :-)
Marvin
P.S.: Also, I don't understand why listen() and accept() are both necessary when using SOCK_STREAM, but not for SOCK_DGRAM.
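For comparison, here is a minimal Python sketch of how I would expect a UDP receiver to behave (same port as the Octave code above; the sizes are just the ones from my protocol): each recvfrom() consumes exactly one datagram and blocks until the next one actually arrives, and no listen()/accept() is needed because SOCK_DGRAM is connectionless.
import socket

# Minimal UDP receiver: no listen()/accept() is needed for SOCK_DGRAM,
# and each recvfrom() consumes exactly one datagram from the queue,
# blocking until the next one actually arrives.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("", 12345))

config, addr = s.recvfrom(10)   # first datagram (the 10-byte header)
test, addr = s.recvfrom(4)      # blocks here until a *new* datagram is sent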

Related

How does the socket recv function detect the end of a message

Look at these two small, basic Python programs:
import socket
tcpsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcpsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
tcpsock.bind(("", 10000))
tcpsock.listen(10)
(sock, (ip, port)) = tcpsock.accept()
s = sock.recv(1024)
print(s)
Second program:
import socket
import time
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('localhost', 10000))
time.sleep(1)
sock.sendall(b'hello world')
The first program is a socket server. It receives a message through the socket and displays it on the console. The second program is a client which connects to the server and sends it a message.
As you can see, the server reads a message of at most 1024 bytes. My client sends a few bytes.
My question is: how does the server know the message ends after the 'd' char?
I have been working with sockets for years and I have always implemented a delimiter mechanism in order to know where a message stops.
But here it seems to work automatically. My question is: how?
I know TCP can fragment messages. So what happens if the packet is truncated in the middle of my message? Is that managed by the OS?
Thanks
How does the server know the message ends after the 'd' char?
It does not. There is not even a concept of a message in TCP. recv simply returns what is there: it blocks if no data are available and returns what can be read, up to the given size, if data are available. "Data available" means that there are data in the socket's receive buffer, put there by the OS kernel. In other words: recv will not block until the requested number of bytes can be returned; it returns as soon as at least a single byte is in the socket's receive buffer.
For example, if the client did two send or sendall calls shortly after each other, a single recv might return both "messages" together. This can easily be triggered by deferring the recv (add some sleep before it) so that both "messages" are guaranteed to have arrived at the server.
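Because TCP itself has no message boundaries, the usual approach is to add framing on top of it, either with a delimiter or with a length prefix. A minimal length-prefix sketch (the helper names send_msg/recv_exact/recv_msg are illustrative, not part of any standard API):
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # Prefix every message with its length so the receiver knows where it ends.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # Keep calling recv until exactly n bytes have been collected;
    # recv may legally return fewer bytes than requested.
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed the connection mid-message")
        data += chunk
    return data

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)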

Time gap between socket calls, i.e. accept() and recv/send calls

I am implementing a server in which I listen for the client to connect using the accept socket call.
After the accept happens and I receive the socket, I wait for around 10-15 seconds before making the first recv/send call.
The send calls to the client fail with errno = 32, i.e. broken pipe.
Since I don't control the client, I have set the socket option SO_KEEPALIVE on the accepted socket.
const int keepAlive = 1;
acceptsock = accept(sock, (struct sockaddr *)&client_addr, &client_addr_length);
if (setsockopt(acceptsock, SOL_SOCKET, SO_KEEPALIVE, &keepAlive, sizeof(keepAlive)) < 0)
{
    print(" SO_KEEPALIVE fails");
}
Could anyone please tell me what may be going wrong here, and how we can prevent the client socket from closing?
NOTE
One thing that I want to add here is that if there is no time gap, or a gap of less than 5 seconds, between the accept and send/recv calls, the client-server communication occurs as expected.
connect(2) and send(2) are two separate system calls the client makes. The first initiates TCP three-way handshake, the second actually queues application data for transmission.
On the server side though, you can start send(2)-ing data to the connected socket immediately after successful accept(2) (i.e. don't forget to check acceptsock against -1).
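As a quick illustration of that point, here is a self-contained sketch (written in Python only for brevity; the port number and the delays are arbitrary) where the server sends immediately after accept and the client reads the data seconds later -- the bytes simply wait in the client's receive buffer:
import socket
import threading
import time

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 12000))
    srv.listen(1)
    conn, _ = srv.accept()       # Python raises on error instead of returning -1
    conn.sendall(b"hello")       # send right away; no need to wait for the client
    conn.close()
    srv.close()

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                  # give the server a moment to start listening

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 12000))
time.sleep(2)                    # a slow client: the data waits in its receive buffer
print(cli.recv(5))               # b'hello'
cli.close()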
After the accept happens and I receive the socket, I wait for around 10-15 seconds before making the first recv/send call.
Why? Do you mean that the client takes that long to send the data? Or that you just futz around in the server for 10-15 seconds between accept() and recv(), and if so, why?
The send calls to the client fails with errno = 32 i.e broken pipe.
So the client has closed the connection.
Since I don't control the client, I have set the socket option SO_KEEPALIVE on the accepted socket.
That won't stop the client closing the connection.
Could anyone please tell what may be going wrong here
The client is closing the connection.
and how can we prevent the client socket from closing ?
You can't.
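If the client can disappear at any moment, the practical approach is to detect the failure when it happens and clean up. A minimal Python sketch of that (illustrative only; in C the same condition shows up as send() failing with EPIPE, i.e. errno 32, or a SIGPIPE):
import socket

def safe_send(conn: socket.socket, data: bytes) -> bool:
    """Try to send; report False if the peer has already closed the connection."""
    try:
        conn.sendall(data)
        return True
    except (BrokenPipeError, ConnectionResetError):
        # EPIPE (errno 32) / connection reset: the client is gone, clean up our side.
        conn.close()
        return False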

Reproduce write-write-read delay with Java sockets

I have read that the combination of three things causes something like a 200ms delay with TCP: Nagle's algorithm, delayed acknowledgement, and the "write-write-read" combination. However, I cannot reproduce this delay with Java sockets and I am therefore not sure if I have understood correctly.
I am running a test on Windows 7 with Java 7, with two threads using sockets over the loopback address. I have not touched the tcpNoDelay option on any socket (false by default) nor played around with any TCP settings in the OS. The main piece of code in the client is shown below. The server responds with one byte after every two bytes it receives from the client.
for (int i = 0; i < 100; i++) {
    client.getOutputStream().write(1);
    client.getOutputStream().write(2);
    System.out.println(client.getInputStream().read());
}
I do not see any delay. Why not?
I believe you are seeing delayed acknowledgment at work.
You write one byte and then another byte to the socket (write(int) sends a single byte, not four). The server's TCP stack receives a segment, wakes up the server application thread, and that thread writes a byte back to the stream; this byte is sent to the client within the ACK segment. That is, the TCP stack gives the application a chance to send a reply immediately. So you see no delay.
You can capture a dump of the traffic and also run the experiment between two computers to see what really happens.
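If you want to experiment with this, the relevant knob is TCP_NODELAY, which disables Nagle's algorithm; comparing runs with and without it (together with a packet capture) shows what really happens on the wire. A small sketch of setting it (shown in Python for brevity; in Java the equivalent is Socket.setTcpNoDelay(true); the local test server on port 10000 is assumed, not provided here):
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm so each write goes out as its own segment
# instead of possibly being held back while earlier data is unacknowledged.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.connect(("localhost", 10000))   # assumes some local reply server is listening here
sock.sendall(b"\x01")                # write ...
sock.sendall(b"\x02")                # ... write ...
print(sock.recv(1))                  # ... read: the pattern from the question
sock.close()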

Socket programming Client Connect

I am working on client-server programming. I am referring to this link, and my server is running successfully.
I need to send data continuously to the server.
I don't want to connect() every time before sending each packet. So the first time I just created a socket and sent the first packet; for the rest of the data I just used the write() function to write to the socket.
But my problem is that while sending data continuously, if the server is not there or my Ethernet is disabled, write() still successfully writes data to the socket.
Is there any method by which I can create the socket only once and send data continuously while still detecting server failure?
The main reason for doing it like this is that on the server side I am using a GPRS modem, and each time the connect() function is called for a packet the modem hangs.
For creating the socket I am using the code below:
Gprs_sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (Gprs_sockfd < 0)
{
    Display("ERROR opening socket");
    return 0;
}

server = gethostbyname((const char *)ip_address);
if (server == NULL)
{
    Display("ERROR, no such host");
    return 0;
}

bzero((char *)&serv_addr, sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
bcopy((char *)server->h_addr, (char *)&serv_addr.sin_addr.s_addr, server->h_length);
serv_addr.sin_port = htons(portno);

if (connect(Gprs_sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0)
{
    Display("ERROR connecting");
    return 0;
}
And each time I write to the socket using the code below:
n = write(Gprs_sockfd, data, length);
if (n < 0)
{
    Display("ERROR writing to socket");
    return 0;
}
Thanks in advance.
TCP was designed to tolerate temporary failures. It does byte sequencing, acknowledgments, and, if necessary, retransmissions. All unacknowledged data is buffered inside the kernel network stack. If I remember correctly the default is three re-transmission attempts (somebody correct me if I'm wrong) with exponential back-off timeouts. That quickly adds up to dozens of seconds, if not minutes.
My suggestion would be to design application-level acknowledgments into your protocol, meaning the server would send a short reply saying how much data it has received so far, say every second. If the client does not receive such an ack within, say, 3 seconds, the client knows the connection is unusable and can close it. By the way, this is easier done with non-blocking sockets and polling functions like select(2) or poll(2).
Edit 0:
I think this would be very relevant here - "The ultimate SO_LINGER page, or: why is my tcp not reliable".
Nikolai is correct here; the behaviour you are experiencing is desirable, since basically you can continue transferring data after a network outage without any extra logic in your application. If your application needs to detect outages longer than a specified amount of time, you need to add heartbeating to your protocol. This is the standard way of solving the problem. It also lets you detect the situation where the network is fine and the receiver is alive, but it has deadlocked (due to a software bug).
Heartbeating can be as simple as Nikolai mentioned -- sending a small packet every X seconds; if the server doesn't see a packet for N*X seconds, the connection is dropped.
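A minimal sketch of that heartbeat idea, assuming plain TCP and Python's select (the interval and timeout constants and the 1-byte heartbeat marker are illustrative choices, not part of the original protocol):
import select
import socket
import time

HEARTBEAT_INTERVAL = 1.0   # X: sender emits a heartbeat this often (seconds)
HEARTBEAT_TIMEOUT = 3.0    # N*X: receiver declares the peer dead after this long

def client_loop(sock: socket.socket) -> None:
    # Sender side: besides the real data, emit a 1-byte heartbeat every X seconds.
    while True:
        sock.sendall(b"\x00")          # heartbeat marker (a real protocol would frame this)
        time.sleep(HEARTBEAT_INTERVAL)

def server_loop(conn: socket.socket) -> None:
    # Receiver side: if nothing arrives for N*X seconds, treat the link as dead.
    conn.setblocking(False)
    last_seen = time.monotonic()
    while True:
        readable, _, _ = select.select([conn], [], [], HEARTBEAT_TIMEOUT)
        if readable:
            data = conn.recv(4096)
            if not data:               # orderly close by the peer
                break
            last_seen = time.monotonic()
        elif time.monotonic() - last_seen > HEARTBEAT_TIMEOUT:
            break                      # no data and no heartbeat: drop the connection
    conn.close()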

recv() returns 0

I have a very annoying problem that I have seen discussed several times on other forums, but I can't find a proper solution.
The problem is that recv() returns 0 on the last few bytes of a connection. Here is some background information.
Both (client / server) applications run on the same machine.
Both (client / server) sockets are non-blocking
The transferred data size is 53 bytes.
Both (client / server) call shutdown and closesocket after the last send()/recv() has executed.
I also tried SO_LINGER with 10 seconds, with no success either.
I call send() several times (small chunks), and from the client side 53 bytes are transferred.
The server calls recv() several times (4-byte requests), reads 49 bytes, and then recv() returns 0 (53 bytes - 49 bytes, so 4 bytes are missing).
MSDN and some forums say the following about non-blocking sockets:
recv() definitely returns < 0 on error and errno / WSAGetLastError is set
recv() definitely returns = 0 when the other side closed the connection
recv() definitely returns > 0 when data was read
MSDN also says:
Using the closesocket or shutdown functions with SD_SEND or SD_BOTH results in a RELEASE signal being sent out on the control channel. Due to ATM's use of separate signal and data channels, it is possible that a RELEASE signal could reach the remote end before the last of the data reaches its destination, resulting in a loss of that data. One possible solution is programming a sufficient delay between the last data sent and the closesocket or shutdown function calls for an ATM socket.
This is addressed in the example for recv() and send(): http://msdn.microsoft.com/en-us/library/windows/desktop/ms740121(v=vs.85).aspx
But still no success: the transfer still breaks off in about 10% of all connections after the 49th byte is received, while 90% of the connections succeed. Any ideas? Thanks.
recv() returns 0 only when you request a 0-byte buffer or the other peer has gracefully disconnected. If you are not receiving all of the data you are expecting, then you are not reading the data correctly to begin with. Please update your question with your actual code.
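To illustrate that, here is a small receive-loop sketch in Python (a hypothetical helper, not the asker's code) that keeps asking for 4-byte chunks and treats an empty return strictly as end-of-stream:
import socket

def read_until_close(conn: socket.socket) -> bytes:
    """Read in small chunks until the peer shuts down its sending side."""
    chunks = []
    while True:
        chunk = conn.recv(4)        # ask for 4 bytes at a time, as in the question
        if chunk == b"":            # empty result only after an orderly shutdown by the peer
            break
        chunks.append(chunk)
    return b"".join(chunks)         # all 53 bytes end up here, however TCP chunked them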
My guess is that you're not really sending all the data you think your are sending. Check out:
The Ultimate SO_LINGER page
recv() definitely returns = 0 when the other side closed the connection
This is not completely true. In the following code, using non-blocking Winsock2 TCP, when no data is available select returns 1 and recv returns 0, as does WSAGetLastError().
fd_set test = {1, socket};          // Winsock fd_set layout: fd_count = 1, fd_array[0] = socket
const timeval timeout = {0, 0};     // zero timeout: poll without blocking
if (!::select(0, &test, nullptr, nullptr, &timeout)) return 0;
int done = ::recv(socket, buffer, 1, 0);
This continues even after the other end has called:
::shutdown(socket, SD_BOTH);
::closesocket(socket);
and then ended. Communication works as expected; it is just ::recv that seems to be "broken".
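For comparison, here is how the two cases are separated with Python's socket module (a sketch only, not Winsock: "no data yet" surfaces as a BlockingIOError, while an empty return value really does mean the peer has shut down its side of the connection):
import socket

def poll_once(conn: socket.socket) -> str:
    conn.setblocking(False)
    try:
        data = conn.recv(1)
    except BlockingIOError:
        return "no data available yet"      # peer still connected, nothing to read
    if data == b"":
        return "peer closed its side"       # orderly shutdown: recv reports end-of-stream
    return f"got byte {data!r}"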