I want to simulate a simple continuous client-server request-response behaviour: the client sends a packet to the server, the server receives the packet and responds, the client receives the response and immediately sends a new packet, and so on. I have figured out how to send one round of communication (client->server->client) but don't know how to keep it going. This is my code for one round:
// UDP echo server listening on port 9, installed on the last AP node
UdpEchoServerHelper echoServer (9);
ApplicationContainer serverApps = echoServer.Install (wifiApNode.Get (nWifiAp - 1));
serverApps.Start (Seconds (1.0));
serverApps.Stop (Seconds (10.0));

// UDP echo client on the last STA node, pointed at the server's address
UdpEchoClientHelper echoClient (apDevicesInterfaces.GetAddress (nWifiAp - 1), 9);
echoClient.SetAttribute ("MaxPackets", UintegerValue (1));
echoClient.SetAttribute ("Interval", TimeValue (Seconds (1.0)));
echoClient.SetAttribute ("PacketSize", UintegerValue (1024));

ApplicationContainer clientApps;
clientApps = echoClient.Install (wifiStaNodes.Get (nWifiSta - 1));
clientApps.Start (Seconds (2.0));
clientApps.Stop (Seconds (10.0));
If I set MaxPackets to any integer other than 1, I get that many rounds, but they all start one second apart (because of the Interval attribute being set to 1.0 s). I want the client to send the next packet as soon as it receives the response from the server, not after waiting for one second.
You will need to modify the echoClient application, in particular the HandleRead method that is responsible for the reception of packets.
Currently it only logs that a packet was received. Take a look at the UdpEchoServer application, where HandleRead generates the response.
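For instance, here is a minimal sketch of what that change could look like inside src/applications/model/udp-echo-client.cc. The member names (m_sent, m_count) and the ScheduleTransmit helper are taken from the stock ns-3 sources, so verify them against your ns-3 version:

// Sketch: UdpEchoClient::HandleRead modified to fire the next request
// as soon as an echo response arrives, instead of waiting for the
// Interval timer. Member names follow the stock ns-3 UdpEchoClient.
void
UdpEchoClient::HandleRead (Ptr<Socket> socket)
{
  Ptr<Packet> packet;
  Address from;
  while ((packet = socket->RecvFrom (from)))
    {
      NS_LOG_INFO ("At time " << Simulator::Now ().GetSeconds ()
                   << "s client received " << packet->GetSize ()
                   << " bytes");
      if (m_sent < m_count)
        {
          // Send the next packet immediately (zero delay).
          ScheduleTransmit (Seconds (0.0));
        }
    }
}

For this to be driven by responses alone, you would also remove (or guard) the ScheduleTransmit (m_interval) call at the end of UdpEchoClient::Send, and set MaxPackets to the number of rounds you want.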
OpenSIPS provides various timeouts for configuration: http://www.opensips.org/html/docs/modules/1.8.x/tm.html
How to measure the time (ring duration) between receiving an INVITE and 200 OK? Is there a special function?
I was able to solve this using the $Ts core variable.
i) Record the initial timestamp:
$dlg_val(inviteStartTimestamp) = $Ts;
ii) When 200 OK is received in the reply route, find the time difference in seconds:
$var(ringDurationSec) = $Ts - $(dlg_val(inviteStartTimestamp){s.int});
I am implementing a server in which I listen for the client to connect using the accept socket call.
After the accept happens and I receive the socket, I wait around 10-15 seconds before making the first recv/send call.
The send calls to the client fail with errno = 32, i.e. broken pipe.
Since I don't control the client, I have set the socket option SO_KEEPALIVE on the accepted socket.
const int keepAlive = 1;
acceptsock = accept(sock, (struct sockaddr *)&client_addr, &client_addr_length);
if (setsockopt(acceptsock, SOL_SOCKET, SO_KEEPALIVE, &keepAlive, sizeof(keepAlive)) < 0)
{
    perror("SO_KEEPALIVE fails");
}
Could anyone please tell me what may be going wrong here, and how can we prevent the client socket from closing?
NOTE
One thing I want to add here is that if there is no time gap, or a gap of less than 5 seconds, between the accept and the send/recv calls, the client-server communication works as expected.
connect(2) and send(2) are two separate system calls the client makes. The first initiates the TCP three-way handshake; the second actually queues application data for transmission.
On the server side, though, you can start send(2)-ing data to the connected socket immediately after a successful accept(2) (just don't forget to check acceptsock against -1).
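For reference, a minimal sketch of that pattern, assuming sock is an already-listening TCP socket (the variable names mirror the snippet above):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

/* Sketch: accept a connection and send to it immediately, checking
 * every return value along the way. */
void serve_one(int sock)
{
    struct sockaddr_in client_addr;
    socklen_t client_addr_length = sizeof(client_addr);

    int acceptsock = accept(sock, (struct sockaddr *)&client_addr,
                            &client_addr_length);
    if (acceptsock == -1) {           /* always check accept() itself */
        perror("accept");
        return;
    }

    const char msg[] = "hello\n";
    if (send(acceptsock, msg, sizeof(msg) - 1, 0) == -1)
        perror("send");               /* EPIPE here means the peer closed */

    close(acceptsock);
}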
After the accept happens and I receive the socket, i wait for around 10-15 seconds before making the first recv/send call.
Why? Do you mean that the client takes that long to send the data? or that you just futz around in the server for 10-15s between accept() and recv(), and if so why?
The send calls to the client fail with errno = 32, i.e. broken pipe.
So the client has closed the connection.
Since I don't control the client, I have set socket option SO_KEEPALIVE on the accepted socket.
That won't stop the client closing the connection.
Could anyone please tell me what may be going wrong here
The client is closing the connection.
and how can we prevent the client socket from closing?
You can't.
I have two applications communicating via TCP sockets: the first one receives and the second one sends.
First app:
start = clock();
recv();
end = clock();
When I run the application, (end-start) is 150-200 msecs (always).
Second app:
while (!stop)
{
    start = clock();
    prepare_message();
    send();
    end = clock();
}
When I run the application, (end-start) is 0.00 msecs (always).
The message payload is roughly 200-300 bytes and the ping time is <1 ms. So why does the receiver wait 200 ms while the sender does not?
How can I account for those 200 msecs?
Thanks
The sender sends the message whenever it is ready. The receiver has to wait for the message, and this is where the extra time may come from. How do you ensure that recv() is called after the message has been sent? If you don't, then recv() is most likely waiting for input while the sender hasn't yet reached that part of the code.
Another thing is that, depending on the socket options you use, the sender may only save the message in a buffer, since TCP (Nagle's algorithm) can wait for more data to combine into a single packet. You should use the TCP_NODELAY option to avoid that.
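For completeness, here is a sketch of setting that option; sock is assumed to be the sender's connected TCP socket:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_NODELAY */

/* Sketch: disable Nagle's algorithm so small writes are transmitted
 * immediately instead of being coalesced with later data. */
int disable_nagle(int sock)
{
    int flag = 1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) < 0) {
        perror("setsockopt(TCP_NODELAY)");
        return -1;
    }
    return 0;
}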
I am working on client-server programming. I am referring to this link, and my server runs successfully.
I need to send data to the server continuously.
I don't want to call connect() before sending each packet. So the first time I just create a socket and send the first packet; for the rest of the data I just use write() to write to the socket.
But my problem is that while sending data continuously, even if the server is gone or my Ethernet is disabled, write() still reports success.
Is there any method by which I can create the socket only once and send data continuously while still detecting server failure?
The main reason for doing it this way is that, on the server side, I am using a GPRS modem, and when connect() is called for every packet the modem hangs.
For creating the socket I use the code below:
Gprs_sockfd = socket(AF_INET, SOCK_STREAM, 0); /* create a TCP socket */
if (Gprs_sockfd < 0)
{
    Display("ERROR opening socket");
    return 0;
}

server = gethostbyname((const char *)ip_address); /* resolve the server address */
if (server == NULL)
{
    Display("ERROR, no such host");
    return 0;
}

bzero((char *)&serv_addr, sizeof(serv_addr)); /* fill in the server sockaddr */
serv_addr.sin_family = AF_INET;
bcopy((char *)server->h_addr, (char *)&serv_addr.sin_addr.s_addr, server->h_length);
serv_addr.sin_port = htons(portno);

if (connect(Gprs_sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0) /* connect once */
{
    Display("ERROR connecting");
    return 0;
}
And each time I write to the socket using the code below:
n = write(Gprs_sockfd, data, length);
if (n < 0)
{
    Display("ERROR writing to socket");
    return 0;
}
Thanks in advance.
TCP was designed to tolerate temporary failures. It does byte sequencing, acknowledgments, and, if necessary, retransmissions. All unacknowledged data is buffered inside the kernel network stack. On Linux, the default is 15 retransmission attempts (the tcp_retries2 sysctl) with exponential back-off timeouts, which adds up to many minutes before the connection is declared dead.
My suggestion would be to design application-level acknowledgments into your protocol, meaning the server would send a short reply saying how much data it has received so far, say every second. If the client does not receive such an ack within, say, 3 seconds, it knows the connection is unusable and can close it. By the way, this is easier done with non-blocking sockets and polling functions like select(2) or poll(2).
Edit 0:
I think this would be very relevant here - "The ultimate SO_LINGER page, or: why is my tcp not reliable".
Nikolai is correct here: the behaviour you are seeing is desirable, since it basically lets you continue transferring data after a network outage without any extra logic in your application. If your application should detect outages longer than a specified amount of time, you need to add heartbeating to your protocol. This is the standard way of solving the problem. It also lets you detect the situation where the network is fine and the receiver is alive but has deadlocked (due to a software bug).
Heartbeating could be as simple as Nikolai mentioned -- sending a small packet every X seconds; if the server doesn't see a packet for N*X seconds, the connection is dropped.
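To illustrate, a sketch of the timeout side of such a scheme using select(2); the 3-second limit is just the example value from above, and wait_for_ack is a hypothetical helper name:

#include <stdio.h>
#include <sys/select.h>
#include <sys/time.h>

/* Sketch (hypothetical helper): wait up to timeout_sec for the peer's
 * application-level ack. Returns 1 if something is readable, 0 on
 * timeout (connection presumed dead), -1 on error. */
int wait_for_ack(int sock, int timeout_sec)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sock, &readfds);

    struct timeval tv;
    tv.tv_sec = timeout_sec;    /* e.g. 3 seconds, as suggested above */
    tv.tv_usec = 0;

    int rc = select(sock + 1, &readfds, NULL, NULL, &tv);
    if (rc < 0)
        return -1;              /* select() failed */
    if (rc == 0)
        return 0;               /* no ack in time: close the connection */
    return 1;                   /* ack (or other data) is ready to read */
}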
I started learning the TCP protocol from the internet and I'm doing some experiments. I read this in an article at http://www.diffen.com/difference/TCP_vs_UDP:
"TCP is more reliable since it manages message acknowledgment and retransmissions in case of lost parts. Thus there is absolutely no missing data."
Then I did my experiment: I wrote a block of code using a TCP socket:
while (!EOF(file))
{
    data = read_from(file, 5KB); // read 5KB from the file
    write(data, socket);         // write the data to the socket
}
I thought this would be fine because "TCP is reliable" and it "retransmits lost parts"... but it is not fine at all. A small file is OK, but at around 2 MB it sometimes works and sometimes doesn't...
Now, I try another one:
while (!EOF(file))
{
    wait_for_ACK();              // or sleep 5 seconds
    data = read_from(file, 5KB); // read 5KB from the file
    write(data, socket);         // write the data to the socket
}
It's good now...
All I can think of is that the first version fails because of:
1. buffer overflow on the sender, because the program writes faster than TCP sends (the sending rate is controlled by TCP), or
2. maybe the sending rate is greater than the writing rate, but some packets are lost (and after some retransmissions TCP still fails and gives up...).
Any ideas?
Thanks.
TCP will ensure that you don't lose data, but you should check how many bytes actually got accepted for transmission... the typical loop is
while (size > 0)
{
    ssize_t sz = send(socket, bufptr, size, 0);
    if (sz == -1)
    {
        /* whoops, error: inspect errno */
        break;
    }
    size -= sz;
    bufptr += sz;
}
When the send call accepts some data from your program, it becomes the OS's job to get it to the destination (including retransmission), but the send buffer may be smaller than the amount you need to send, and that's why the resulting sz (number of bytes accepted for transmission) may be less than size.
It's also important to consider that sending is asynchronous: after the send function returns, the data is not yet at the destination; it has only been handed to the TCP transport system for delivery. If you want to know when it has actually been received, you'll have to use another mechanism (e.g. a reply message from your counterpart).
You have to check the return value of write(socket) to make sure it wrote as much as you asked.
Loop until you've sent everything or a timeout you've chosen has elapsed.
Do not use indefinite timeouts on socket read/write. You're asking for trouble if you do, especially on Windows.
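For example, a sketch of a bounded-wait read using poll(2); recv_timeout and its return-code convention are illustrative, not a standard API:

#include <poll.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Sketch (illustrative helper): recv() with an explicit timeout instead
 * of blocking forever. Returns bytes read, 0 on orderly close, -1 on
 * error, -2 on timeout. */
ssize_t recv_timeout(int sock, void *buf, size_t len, int timeout_ms)
{
    struct pollfd pfd;
    pfd.fd = sock;
    pfd.events = POLLIN;

    int rc = poll(&pfd, 1, timeout_ms);
    if (rc < 0)
        return -1;   /* poll() failed */
    if (rc == 0)
        return -2;   /* timed out: no data within timeout_ms */
    return recv(sock, buf, len, 0);
}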