I am writing an application which continuously sends and receives data. My initial send/receive runs successfully, but when I expect 512 bytes of data in recvfrom, it returns -1 ("Resource temporarily unavailable") with errno set to EAGAIN. If I use a blocking call, i.e. without a timeout, the application just hangs in recvfrom. Is there any maximum limit on recvfrom on the iPhone? Below is the function which receives data from the server. I am unable to figure out what is going wrong.
{
    struct timeval tv;
    tv.tv_sec = 3;
    tv.tv_usec = 100000;
    setsockopt(mSock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof tv);

    NSLog(@"Receiving.. sock:%d", mSock);

    recvBuff = (unsigned char *)malloc(1024);
    if (recvBuff == NULL) {
        NSLog(@"Cannot allocate memory to recvBuff");
        return;  // bail out instead of passing NULL to recvfrom
    }

    fromlen = sizeof(struct sockaddr_in);
    n = recvfrom(mSock, recvBuff, 1024, 0, (struct sockaddr *)&from, &fromlen);
    if (n == -1) {
        [self error:@"Recv From"];
        free(recvBuff);  // avoid leaking the buffer on the error path
        return;
    }
    else {
        NSLog(@"Recv Addr: %s Recv Port: %d", inet_ntoa(from.sin_addr), ntohs(from.sin_port));
        strIPAddr = [[NSString alloc] initWithFormat:@"%s", inet_ntoa(from.sin_addr)];
        portNumber = ntohs(from.sin_port);
        lIPAddr = [KDefine StrIpToLong:strIPAddr];
        write(1, recvBuff, n);
        bcopy(recvBuff, data, n);
        actualRecvBytes = n;
        free(recvBuff);
    }
}
Read the manpage:
If no messages are available at the socket, the receive call waits for a message to arrive, unless the socket is nonblocking (see fcntl(2)) in which case the value -1 is returned and the external variable errno set to EAGAIN.
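With SO_RCVTIMEO set, the same rule applies: when the timer expires before a datagram arrives, recvfrom returns -1 with errno set to EAGAIN (or EWOULDBLOCK), so it should be treated as "no data yet" rather than a fatal error. A minimal sketch of that pattern, reusing the mSock descriptor name from the question:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

// Returns the number of bytes received, 0 if the receive timed out,
// or -1 on a genuine error. Assumes SO_RCVTIMEO is already set on mSock.
static ssize_t recv_or_timeout(int mSock, void *buf, size_t len)
{
    ssize_t n = recvfrom(mSock, buf, len, 0, NULL, NULL);
    if (n >= 0)
        return n;                                  // got a datagram
    if (errno == EAGAIN || errno == EWOULDBLOCK)
        return 0;                                  // timeout: nothing arrived, retry later
    fprintf(stderr, "recvfrom: %s\n", strerror(errno));
    return -1;                                     // real failure
}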
I was writing a UDP application and think I came across a similar issue. Peter Hosey is correct in stating that the given result of recvfrom means that there is no data to be read; but you were wondering: how can there be no data?
If you are sending several UDP datagrams at a time from some host to your iPhone, some of those datagrams may be discarded because the receive buffer on the iPhone is not large enough to accommodate that much data at once.
The robust way to fix the problem is to implement a mechanism that lets your application request retransmission of missing datagrams. A less robust solution (which does not address everything the robust one does) is simply to increase the receive buffer size using setsockopt(2).
The buffer size adjustment can be done as follows:
int rcvbuf_size = 128 * 1024;   // that's 128 KB of buffer space

if (setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF,
               &rcvbuf_size, sizeof(rcvbuf_size)) == -1) {
    // put your error handling here...
}
You may have to play around with buffer size to find what's optimal for your application.
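Note that the kernel may clamp or otherwise adjust the value you request (Linux, for instance, doubles it to account for bookkeeping overhead), so it is worth reading the size back to see what was actually granted. A small sketch using getsockopt(2):

#include <stdio.h>
#include <sys/socket.h>

// Print the receive-buffer size the kernel actually granted for sockfd.
static void print_rcvbuf(int sockfd)
{
    int rcvbuf_actual = 0;
    socklen_t optlen = sizeof(rcvbuf_actual);
    if (getsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &rcvbuf_actual, &optlen) == 0)
        printf("effective SO_RCVBUF: %d bytes\n", rcvbuf_actual);
}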
For me it was a type issue. Essentially I was assigning the returned value to an int instead of ssize_t, the signed type recvfrom is declared to return:
int rtn = recvfrom(sockfd, ...     // wrong
instead of:
ssize_t rtn = recvfrom(sockfd, ... // correct
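For what it's worth, the signedness is what matters here: recvfrom(2) returns ssize_t, and storing the result in an unsigned type silently turns the -1 error value into a huge positive count. A tiny standalone demonstration:

#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    ssize_t err = -1;            // what recvfrom(2) returns on failure
    size_t wrong = (size_t)err;  // unsigned: wraps around to SIZE_MAX
    printf("as ssize_t: %zd\n", err);    // prints -1
    printf("as size_t:  %zu\n", wrong);  // prints a huge positive number
    return 0;                    // so a `rtn < 0` error check never fires with size_t
}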
I want to simulate a server that receives packets from multiple clients and processes the data from these packets simultaneously in NS-3. I have simulated a single server and client in NS-3 by modifying the udp-echo-server and udp-echo-client applications. Now, to implement multiple clients, I modified the end of the StartApplication function in the udp-echo-server application as follows:
if ((childpid = fork()) == 0)
{
    m_socket->SetRecvCallback (MakeCallback(&UdpEchoServer::HandleRead, this));
    m_socket6->SetRecvCallback (MakeCallback(&UdpEchoServer::HandleRead, this));
}
But it does not work: with two clients connected, it just reads from the first client and ignores the second. StartApplication only runs once. Can anyone help me with this?
Thanks
The fundamental problem with what you're trying to do is that ns-3 is a single-threaded simulator. You should not use fork() to simulate multiple processes. If you want multiple clients, you have to create them explicitly. I have quickly whipped up a simple example:
/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */

// simple udp multi-client, single-server simulation to answer
// https://stackoverflow.com/q/59632211/13040392

#include "ns3/core-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/ipv4-global-routing-helper.h"
#include "ns3/applications-module.h"
#include "ns3/point-to-point-grid.h"
#include "ns3/flow-monitor-module.h"

using namespace ns3;

NS_LOG_COMPONENT_DEFINE("UdpMultiClient");

int
main(int argc, char *argv[])
{
  // create grid structure of network
  // not necessary. Could just create topology manually
  PointToPointHelper p2pLink;
  PointToPointGridHelper grid (2, 2, p2pLink);

  InternetStackHelper stack;
  grid.InstallStack(stack);

  // assign IP addresses to NetDevices
  grid.AssignIpv4Addresses (Ipv4AddressHelper ("10.1.1.0", "255.255.255.0"),
                            Ipv4AddressHelper ("10.2.1.0", "255.255.255.0"));
  Ipv4GlobalRoutingHelper::PopulateRoutingTables();

  // configure and install server app
  int serverPort = 8080;
  UdpEchoServerHelper serverApp (serverPort);
  serverApp.Install(grid.GetNode(0,0));
  Address serverAddress = InetSocketAddress(grid.GetIpv4Address(0,0), serverPort);

  // configure and install client apps
  UdpEchoClientHelper clientApp (serverAddress);
  clientApp.Install(grid.GetNode(0,1));
  clientApp.Install(grid.GetNode(1,0));
  clientApp.Install(grid.GetNode(1,1));

  // install FlowMonitor to collect simulation statistics
  FlowMonitorHelper flowHelper;
  Ptr<FlowMonitor> flowMonitor = flowHelper.InstallAll();

  // configure and run simulation
  Simulator::Stop(Seconds(10));
  NS_LOG_UNCOND("Starting simulation.");
  Simulator::Run();
  Simulator::Destroy();
  NS_LOG_UNCOND("Simulation completed.");

  // simulation complete
  // get statistics of simulation from FlowMonitor
  flowMonitor->CheckForLostPackets();
  std::map<FlowId, FlowMonitor::FlowStats> stats = flowMonitor->GetFlowStats();

  uint64_t txPacketsum = 0;
  uint64_t rxPacketsum = 0;
  uint64_t DropPacketsum = 0;
  uint64_t LostPacketsum = 0;
  double Delaysum = 0;

  for (std::map<FlowId, FlowMonitor::FlowStats>::const_iterator i = stats.begin(); i != stats.end(); ++i)
    {
      txPacketsum += i->second.txPackets;
      rxPacketsum += i->second.rxPackets;
      LostPacketsum += i->second.lostPackets;
      DropPacketsum += i->second.packetsDropped.size();
      Delaysum += i->second.delaySum.GetSeconds();
    }

  NS_LOG_UNCOND(std::endl << " SIMULATION STATISTICS");
  NS_LOG_UNCOND(" All Tx Packets: " << txPacketsum);
  NS_LOG_UNCOND(" All Rx Packets: " << rxPacketsum);
  NS_LOG_UNCOND(" All Delay: " << Delaysum / txPacketsum);
  NS_LOG_UNCOND(" All Lost Packets: " << LostPacketsum);
  NS_LOG_UNCOND(" All Drop Packets: " << DropPacketsum);
  NS_LOG_UNCOND(" Packets Delivery Ratio: " << ((rxPacketsum * 100) / txPacketsum) << "%");
  NS_LOG_UNCOND(" Packets Lost Ratio: " << ((LostPacketsum * 100) / txPacketsum) << "%");

  // flowMonitor->SerializeToXmlFile("test.xml", true, true);

  return 0;
}
As a quick note, in
UdpEchoClientHelper clientApp (serverAddress);
clientApp.Install(grid.GetNode(0,1));
clientApp.Install(grid.GetNode(1,0));
clientApp.Install(grid.GetNode(1,1));
we installed the UdpEchoClient on three Nodes. According to the documentation for this Application, UdpEchoClient sends a packet every 1000000000 ns = 1 s by default. Since we set the length of the simulation to 10 seconds using Simulator::Stop(Seconds(10));, we expect each client to send 10 packets to the server, so a total of 30 packets should be sent by the clients. Also, since we are using UdpEchoServerHelper on the server, each packet will be echoed back by the server. Therefore, a total of 30 × 2 = 60 packets should be transmitted on the network.
The output of the script is
Starting simulation.
Simulation completed.
SIMULATION STATISTICS
All Tx Packets: 60
All Rx Packets: 60
All Delay: 0.0423177
All Lost Packets: 0
All Drop Packets: 0
Packets Delivery Ratio: 100%
Packets Lost Ratio: 0%
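As a side note, those defaults can be overridden through the helper before the Install calls; for example (these are standard UdpEchoClient attributes, with arbitrary illustrative values):

// Send 20 packets of 512 bytes each, one every 0.5 s, instead of the defaults.
clientApp.SetAttribute("MaxPackets", UintegerValue(20));
clientApp.SetAttribute("Interval", TimeValue(Seconds(0.5)));
clientApp.SetAttribute("PacketSize", UintegerValue(512));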
This answer actually demonstrates several features of ns-3, so feel free to ask any followup questions. I highly encourage you to check out the ns-3 documentation for classes you haven't encountered yet.
I am trying to determine the different limits of unix datagram sockets, as I am using them for IPC in my project.
The obscure thing I want to control is the size of my socket's internal buffer:
I want to know how many datagrams I can send before my socket would block.
I've understood that two different limits affect the size of the socket's buffer:
/proc/sys/net/core/wmem_{max,default} sets the max (and default) size of a socket's write buffer
/proc/sys/net/unix/max_dgram_qlen sets the maximum number of datagrams the buffer can hold
I know that /proc/sys/net/core/rmem_{max,default} sets the max (and default) size of a socket's read buffer, but as I am working with local unix sockets it doesn't seem to have an impact.
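To inspect these limits at runtime rather than cat-ing /proc by hand, I read them back from the same files programmatically; a small helper (shown for max_dgram_qlen, using the path listed above):

#include <stdio.h>

// Read the system-wide unix datagram queue limit (-1 on failure).
static long read_max_dgram_qlen(void)
{
    long qlen = -1;
    FILE *f = fopen("/proc/sys/net/unix/max_dgram_qlen", "r");
    if (f) {
        if (fscanf(f, "%ld", &qlen) != 1)
            qlen = -1;
        fclose(f);
    }
    return qlen;
}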
I have set wmem_{max, default} to 136314880 (130 MB) and max_dgram_qlen to 500000.
I then wrote a small program in which a sender socket sends fixed-size datagrams to a receiver socket until it would block, and printed the size and number of datagrams I was able to send.
Here is the code I used :
#include <err.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <unistd.h>

/* Payload size in bytes. */
#define PAYLOAD_SIZE 100

#define CALL_AND_CHECK(syscall) \
    do { if ((syscall) < 0) { err(1, NULL); } } while (0)

int main(void)
{
    int receiver_socket_fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_NONBLOCK, 0);
    if (receiver_socket_fd < 0)
        err(1, NULL);

    char* binding_path = "test_socket";

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, binding_path, sizeof(addr.sun_path) - 1);

    /* Check if the file exists; if yes, delete it! */
    if (access(binding_path, F_OK) != -1) {
        CALL_AND_CHECK(unlink(binding_path));
    }

    CALL_AND_CHECK(bind(receiver_socket_fd, (struct sockaddr const*) &addr, sizeof(addr)));

    int sender_socket_fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_NONBLOCK, 0);
    if (sender_socket_fd < 0)
        err(1, NULL);

    CALL_AND_CHECK(connect(sender_socket_fd, (struct sockaddr const*) &addr, sizeof(addr)));

    struct payload { char data[PAYLOAD_SIZE]; };

    /* Create test payload with null bytes. */
    struct payload test_payload;
    memset(&test_payload.data, 0, PAYLOAD_SIZE);

    ssize_t total_size_written = 0;
    ssize_t size_written = 0;
    do {
        size_written = write(sender_socket_fd, (const void *) &test_payload, PAYLOAD_SIZE);
        if (size_written > 0)
            total_size_written += size_written;
    } while (size_written > 0);

    printf("socket_test: %zd bytes (%zd datagrams) were written before blocking, last error was:\n",
           total_size_written, total_size_written / PAYLOAD_SIZE);
    perror(NULL);

    CALL_AND_CHECK(unlink(binding_path));
    CALL_AND_CHECK(close(sender_socket_fd));
    CALL_AND_CHECK(close(receiver_socket_fd));

    return 0;
}
I was expecting to reach either the maximum size in bytes of the socket (here 130 MB) or the maximum number of datagrams I set (500000).
But the actual result is that I am only able to write 177494 datagrams before blocking.
I can change the size of my payload and it's always the same result (as long as I don't reach the maximum size in bytes first). So it seems that I am hitting some limit other than max_dgram_qlen and wmem_{max,default} that I can't find.
I have of course tried to investigate ulimit and limits.conf, without success; ulimit -b doesn't even work on my machine (it reports "options not found" and returns).
I am working on Debian 10 (buster) but have run my test program on different OSes with the same result: I hit a datagram limit that I don't know about.
Do you have any idea which limit I am reaching, and whether I can read or modify it?
I have been injecting packets on the network and watching the effects via Wireshark. I am able to correctly set and change TCP ports and set the source and destination. However, I am now having an issue: one of the things I need to do is set a source port of 66,000. Every time I try, the number just comes out as 1163 in Wireshark, because the field is supposed to be a short integer. Does anyone know how to make it accept the big number? I figured the byte-order conversion with htonl should help, so I tried that as well, but it didn't solve the issue.
Here is the code I am using:

void extract(u_char *user, struct pcap_pkthdr *h, u_char *pack)
{
    struct eth_hdr *ethhdr;
    struct ip_hdr *iphdr;
    struct tcp_hdr *tcphdr;

    ethhdr = (struct eth_hdr *)pack;
    iphdr = (struct ip_hdr *)(pack + ETH_HDR_LEN);
    tcphdr = (struct tcp_hdr *)(pack + ETH_HDR_LEN + (4 * iphdr->ip_hl));

    // Set the ports
    tcphdr->th_sport = htons(66666);
    tcphdr->th_dport = htons(atoi(destString));
The port number is a 16-bit field; with 16 bits you can only get up to 65535. There is no way around it. See also the TCP header structure at http://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure.
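A small demonstration of the resulting wrap-around: a 16-bit field keeps only the low 16 bits of the value, so 66666 from the code above becomes 66666 mod 65536 = 1130.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t port = (uint16_t)66666;  // TCP port fields are 16 bits wide
    printf("%u\n", port);             // prints 1130 (66666 mod 65536)
    return 0;
}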
I'm trying to write a TCP SYN port scanner in Golang. I found a solution in a C version here: http://www.binarytides.com/tcp-syn-portscan-in-c-with-linux-sockets/
I'd like to implement it in Go. How can I send a TCP header like this in Golang:
//TCP Header
tcph->source = htons(source_port);
tcph->dest = htons(80);
tcph->seq = htonl(1105024978);
tcph->ack_seq = 0;
tcph->doff = sizeof(struct tcphdr) / 4;  // Size of tcp header
tcph->fin = 0;
tcph->syn = 1;
tcph->rst = 0;
tcph->psh = 0;
tcph->ack = 0;
tcph->urg = 0;
tcph->window = htons(14600);  // maximum allowed window size
tcph->check = 0;   // if you set a checksum to zero, your kernel's IP stack
                   // should fill in the correct checksum during transmission
tcph->urg_ptr = 0;
Do I have to use syscall or cgo? I'd really appreciate it if someone could help me out.
You're going to want to use the syscall package. However, syscall is not necessarily portable across operating systems, so if that matters to you, you'll have to write per-OS versions and use the file_os.go naming scheme to hold the OS-specific code.
I'm using Apple's "Simple Ping" example and it has almost all the features I need, but I don't know where I can set the timeout for each packet. It seems it isn't possible, because the function used to write data to the socket doesn't take any timeout parameter. Does anybody have an idea how to change this app to be able to set a timeout like the Windows ping command has? By timeout I mean the time after which each sent packet is treated as lost if no response has arrived.
Windows ping command - timeout I need to have:
"-w Timeout : Specifies the amount of time, in milliseconds, to wait for the Echo Reply message that corresponds to a given Echo Request message to be received. If the Echo Reply message is not received within the time-out, the "Request timed out" error message is displayed. The default time-out is 4000 (4 seconds)."
Simple Ping code I'm using:
http://developer.apple.com/library/mac/#samplecode/SimplePing/Introduction/Intro.html
Apple sample code:

bytesSent = sendto(
    CFSocketGetNative(self->_socket),
    [packet bytes],
    [packet length],
    0,
    (struct sockaddr *) [self.hostAddress bytes],
    (socklen_t) [self.hostAddress length]
);
To change the timeout:

CFSocketNativeHandle sock = CFSocketGetNative(self->_socket);

struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = 100000;  // 0.1 sec

setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO, (void *)&tv, sizeof(tv));

bytesSent = sendto(
    sock,
    [packet bytes],
    [packet length],
    0,
    (struct sockaddr *) [self.hostAddress bytes],
    (socklen_t) [self.hostAddress length]
);
See Apple's docs: setsockopt
From the above referenced doc:
SO_SNDTIMEO is an option to set a timeout value for output operations. It accepts a struct timeval parameter with the number of seconds and microseconds used to limit waits for output operations to complete. If a send operation has blocked for this much time, it returns with a partial count or with the error EWOULDBLOCK if no data were sent. In the current implementation, this timer is restarted each time additional data are delivered to the protocol, implying that the limit applies to output portions ranging in size from the low-water mark to the high-water mark for output.
for example:
tv.tv_sec = 0;
tv.tv_usec = 1000;

setsockopt(recv_sock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(struct timeval));
setsockopt(send_sock, SOL_SOCKET, SO_SNDTIMEO, (char *)&tv, sizeof(struct timeval));
for additional options:
http://developer.apple.com/library/ios/#documentation/system/conceptual/manpages_iphoneos/man2/setsockopt.2.html
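Note that SO_SNDTIMEO only bounds the send side. To reproduce ping's -w behavior (give up waiting for the Echo Reply), you also want a receive timeout on the same socket; a minimal sketch, assuming the same native handle as above (wait_for_reply is a hypothetical helper, not part of Apple's sample):

#include <errno.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

// Hypothetical helper: wait up to timeout_ms for a reply on sock.
// Returns bytes read, 0 on timeout ("Request timed out"), -1 on error.
static ssize_t wait_for_reply(int sock, void *buf, size_t len, int timeout_ms)
{
    struct timeval tv;
    tv.tv_sec  = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    ssize_t n = recv(sock, buf, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;
    return n;
}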