I am trying to write a TCP client-server program to find the maximum number of concurrent connections my server can handle. So I created a simple server and initiated multiple TCP connections (using goroutines).
server.go
package main

import (
    "fmt"
    "net"
)

func main() {
    ln, err := net.Listen("tcp", ":9100")
    if err != nil {
        fmt.Println(err.Error())
        return
    }
    defer ln.Close()
    fmt.Println("server has started")
    for {
        conn, err := ln.Accept()
        if err != nil {
            fmt.Println(err.Error())
            continue // don't call Close on a nil conn
        }
        conn.Close()
        fmt.Println("received one connection")
    }
}
client.go
package main

import (
    "fmt"
    "net"
    "os"
    "sync"
)

func connect(address string, wg *sync.WaitGroup) {
    defer wg.Done()
    conn, err := net.Dial("tcp", address)
    if err != nil {
        fmt.Println(err.Error())
        os.Exit(-1)
    }
    defer conn.Close()
    fmt.Println("connection is established")
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 10000; i++ {
        wg.Add(1)
        fmt.Printf("connection %d\n", i)
        go connect(":9100", &wg)
    }
    wg.Wait()
    fmt.Println("all the connections served")
}
I was expecting the server to become unresponsive because of a TCP SYN attack, but my client application crashed after ~2000 connection requests. The error message is:
dial tcp :9100: too many open files
exit status 255
I need help with the following questions:
What changes are required in client.go to emulate TCP connection flooding against my Go server?
How can I increase the number of open file descriptors on macOS, given that there is a per-process limit on open FDs?
What is the default TCP backlog (queue) size used by Listen()? (In C, the queue size is passed explicitly as listen(fd, queue_size).)
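For reference, this is only a sketch of what I mean by the backlog, expressed in Go terms with the syscall package (the port and backlog values are made up); my real server just calls net.Listen as shown above:

package main

import (
    "fmt"
    "net"
    "os"
    "syscall"
)

// listenWithBacklog builds a TCP listener with an explicit backlog instead of
// relying on whatever default the net package picks up from the kernel.
func listenWithBacklog(port, backlog int) (net.Listener, error) {
    fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_STREAM, 0)
    if err != nil {
        return nil, err
    }
    if err := syscall.Bind(fd, &syscall.SockaddrInet4{Port: port}); err != nil {
        syscall.Close(fd)
        return nil, err
    }
    if err := syscall.Listen(fd, backlog); err != nil {
        syscall.Close(fd)
        return nil, err
    }
    f := os.NewFile(uintptr(fd), "listener")
    defer f.Close() // net.FileListener duplicates the descriptor
    return net.FileListener(f)
}

func main() {
    ln, err := listenWithBacklog(9100, 128)
    if err != nil {
        fmt.Println(err.Error())
        return
    }
    defer ln.Close()
    fmt.Println("listening with an explicit backlog of 128")
}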
Edit:
I found some help for question #2 (increasing the number of open file descriptors), but it is not working (no change in the client error message). A sketch of raising the limit from inside the process follows the ulimit output below.
ulimit settings
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 10000
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited
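Since the shell limit does not seem to help, here is a minimal sketch of raising the soft RLIMIT_NOFILE from inside the client process itself (assuming syscall.Setrlimit behaves the same on macOS as on Linux and that the hard limit is high enough); I have not verified that this fixes the error:

package main

import (
    "fmt"
    "syscall"
)

// raiseNoFileLimit bumps the soft limit on open file descriptors for this
// process only; it fails if n exceeds the hard limit and we are unprivileged.
func raiseNoFileLimit(n uint64) error {
    var lim syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
        return err
    }
    lim.Cur = n
    return syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim)
}

func main() {
    if err := raiseNoFileLimit(20000); err != nil {
        fmt.Println("setrlimit:", err)
        return
    }
    fmt.Println("soft RLIMIT_NOFILE raised to 20000")
}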
Related
I am trying to determine the various limits of Unix datagram sockets, as I am using them as IPC for my project.
The specific thing I want to control is the size of the socket's internal buffer: I want to know how many datagrams I can send before my socket blocks.
As I understand it, two different limits affect the size of the socket's buffer:
/proc/sys/net/core/wmem_{max,default} sets the maximum (and default) size of a socket's write buffer
/proc/sys/net/unix/max_dgram_qlen sets the maximum number of datagrams the queue can hold
I know that /proc/sys/net/core/rmem_{max,default} sets the maximum (and default) size of a socket's read buffer, but as I am working with a local Unix socket it doesn't seem to have an impact.
I have set wmem_{max, default} to 136314880 (130 MB) and max_dgram_qlen to 500000.
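As a sanity check that the new wmem values really apply to a freshly created socket, the effective send-buffer size can be read back with getsockopt(SO_SNDBUF). A quick sketch (shown in Go only for brevity; the equivalent getsockopt(2) call in C behaves the same way):

package main

import (
    "fmt"
    "log"
    "syscall"
)

func main() {
    // Create a Unix datagram socket and read back its effective SO_SNDBUF,
    // which should reflect /proc/sys/net/core/wmem_default.
    fd, err := syscall.Socket(syscall.AF_UNIX, syscall.SOCK_DGRAM, 0)
    if err != nil {
        log.Fatal(err)
    }
    defer syscall.Close(fd)

    sndbuf, err := syscall.GetsockoptInt(fd, syscall.SOL_SOCKET, syscall.SO_SNDBUF)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("effective SO_SNDBUF:", sndbuf)
}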
I then wrote a small program in which the sender socket sends fixed-size datagrams to the receiver socket until the write would block; it then prints the size and number of datagrams it was able to send.
Here is the code I used:
#include <err.h>
#include <stdio.h>
#include <string.h> /* memset, strncpy */
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <unistd.h>
/* Payload size in bytes. */
#define PAYLOAD_SIZE 100
#define CALL_AND_CHECK(syscall) \
if (syscall < 0) { err(1, NULL); }
int main(void)
{
    int receiver_socket_fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_NONBLOCK, 0);
    if (receiver_socket_fd < 0)
        err(1, NULL);

    char* binding_path = "test_socket";
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, binding_path, sizeof(addr.sun_path));

    /* Check if the file exists, if yes delete it! */
    if (access(binding_path, F_OK) != -1) {
        CALL_AND_CHECK(unlink(binding_path));
    }

    CALL_AND_CHECK(bind(receiver_socket_fd, (struct sockaddr const*) &addr, sizeof(addr)));

    int sender_socket_fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_NONBLOCK, 0);
    if (sender_socket_fd < 0)
        err(1, NULL);

    CALL_AND_CHECK(connect(sender_socket_fd, (struct sockaddr const*) &addr, sizeof(addr)));

    struct payload { char data[PAYLOAD_SIZE]; };

    /* Create test payload with null bytes. */
    struct payload test_payload;
    memset(&test_payload.data, 0, PAYLOAD_SIZE);

    ssize_t total_size_written = 0;
    ssize_t size_written = 0;
    do {
        size_written = write(sender_socket_fd, (const void *) &test_payload, PAYLOAD_SIZE);
        if (size_written > 0)
            total_size_written += size_written;
    } while (size_written > 0);

    printf("socket_test: %zd bytes (%zd datagrams) were written before blocking, last error was:\n",
           total_size_written, total_size_written / PAYLOAD_SIZE);
    perror(NULL);

    CALL_AND_CHECK(unlink(binding_path));
    CALL_AND_CHECK(close(sender_socket_fd));
    CALL_AND_CHECK(close(receiver_socket_fd));
    return 0;
}
I was expecting to reach either the maximum size in bytes of the socket (here 130 MB) or the maximum number of datagrams I set (500000).
But the actual result is that I am only able to write 177494 datagrams before the write blocks.
I can change the size of my payload and the result is always the same (as long as I don't reach the maximum size in bytes first). So it seems that I am hitting some limit besides max_dgram_qlen and wmem_{max,default} that I can't find.
I have of course tried to investigate ulimit and limits.conf without success; ulimit -b doesn't even work on my machine (it says "option not found" and returns).
I am working on Debian 10 (buster) but have run my test program on other OSes with the same result: I hit a datagram limit that I don't know about.
Do you have any idea which limit I haven't seen and am reaching? And can I read or modify this limit?
I currently have a Python program that (very slowly) receives data from a Red Pitaya board by repeatedly calling:
redpitaya_scpi.scpi("192.169.1.100").rx_txt()
I would like to use rp_remote_acquire to achieve a higher throughput with a ring buffer.
I am able to execute ./rp_remote_acquire on both the Red Pitaya (server) and a Linux machine (client), thanks to Stack Overflow.
I get some unique content in /tmp/out every time I execute the following commands on the Red Pitaya (which suggests that the program on the server has access to the data from its hardware).
rm /tmp/out
./rp_remote_acquire -m 3
cat /tmp/out
In order to transfer data from the Red Pitaya (server) to the Linux machine (client), I launch ./rp_remote_acquire with the following parameters:
Server (192.169.1.100):
./rp_remote_acquire -m 2 -a 192.169.1.102 -p 14000
Client (192.169.1.102):
./rp_remote_acquire -m 1 -a 192.169.1.100 -p 14000
Where:
-m --mode <(1|client)|(2|server)|(3|file)>
operating mode (default client)
-a --address <ip_address>
target address in client mode (default empty)
-p --port <port_num>
port number in client and server mode (default 14000)
Both machines are able to ping each other, and they are able to establish a connection (i.e. int connection_start(option_fields_t *options, struct handles *handles) at transfer.c:251 returns zero).
The client ends up executing the following code snippet from transfer.c
533 while (!size || transferred < size) {
(gdb) n
534 if (pos == buf_size)
(gdb) n
539 if (pos + CHUNK <= curr) {
(gdb) n
552 memcpy(buf, mapped_base + pos, len);
(gdb) n
554 if (handles->sock >= 0) {
(gdb) n
552 memcpy(buf, mapped_base + pos, len);
(gdb) n
554 if (handles->sock >= 0) {
(gdb) n
555 if (send_buffer(handles->sock, options, buf, len) < 0) {
(gdb) n
569 pos += len;
(gdb) n
533 while (!size || transferred < size) {
It seems like the client is effectively just doing the following (note size = 0 by default):
533 while (!size || transferred < size) {
552 memcpy(buf, mapped_base + pos, len);
552 memcpy(buf, mapped_base + pos, len);
569 pos += len;
}
This behaviour seems to be the intention of the programmer because the client stops as soon as the server is halted:
554 if (handles->sock >= 0) {
(gdb)
556 if (!interrupted)
The program doesn't get stuck in this loop when I change size such that it is not equal to zero (=> smaller packets?).
I would like to be able to access the data that is (hopefully) being sent from the Red Pitaya (server) to the linux machine (client) and somehow make this data available to a python program on the client machine.
My question(s):
What is going on here and how can I access the data?
Do I need to synchronously run a second program on the client that somehow reads the data that rp_remote_acquire is copying into the client's memory?
The solution is surprisingly simple.
When it is running properly in server mode, rp_remote_acquire writes the data to a socket:
/*
* transfers samples to socket via read() call on rpad_scope
*/
static u_int64_t transfer_readwrite(struct scope_parameter *param,
option_fields_t *options, struct handles *handles)
In client mode it reads the data from the socket and does something with it.
Since we are working with sockets here, we don't need to care what rp_remote_acquire does in client mode. We can simply create our own socket in a Python script and receive the data in the script (which is where I want to have the data).
This is an example from @otobrzo:
import socket
import numpy as np
import matplotlib.pyplot as plt

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ip = socket.gethostbyname("XX.XX.XX.XX")  # IP of the Red Pitaya in server mode:
# run: cat ddrdump.bit > /dev/xdevcfg
# compile and run on the Red Pitaya: ./rp_remote_acquire -m 2 -k 0 -c 0 -d 0
port = 14000  # default port for TCP
address = (ip, port)
client.connect(address)

Nl = 10000
# while True:
for x in range(0, Nl):
    # print("test1")
    bytes_data = client.recv(1024)  # sets the amount of data transferred per read
    if x == 0:
        data = np.frombuffer(bytes_data, dtype=np.int16)  # raw 16-bit samples to int16
        data = np.array(data, dtype=float)
        data_all = data
    else:
        data = np.frombuffer(bytes_data, dtype=np.int16)  # raw 16-bit samples to int16
        data = np.array(data, dtype=float)
        data_all = np.hstack((data_all, data))

# %%
FPS = 125e6
time = np.arange(0, np.size(data_all)) / FPS
plt.plot(time, data_all)
I'm writing a simple program in Golang to capture TCP/IP packets using raw sockets:
package main

import (
    "fmt"
    "log"
    "net"
)

func main() {
    netaddr, err := net.ResolveIPAddr("ip", "127.0.0.1")
    if err != nil {
        log.Fatal(err)
    }
    conn, err := net.ListenIP("ip:tcp", netaddr)
    if err != nil {
        log.Fatal(err)
    }
    packet := make([]byte, 64*1024)
    numPackets := 0
    totalLen := 0
    for {
        packetLen, _, err := conn.ReadFrom(packet)
        if err != nil {
            log.Fatal(err)
        }
        dataOffsetWords := (packet[12] & 0xF0) >> 4
        dataOffset := 4 * dataOffsetWords
        payload := packet[dataOffset:packetLen]
        numPackets += 1
        totalLen += len(payload)
        fmt.Println("Num packets:", numPackets, ", Total len:", totalLen)
    }
}
When I compare the number of packets the program receives and the total amount of data they contain to the number of packets Wireshark sees and the total data transmitted, I find I've lost 15-30% of all packets and data on every run.
Why?
The only thing that comes to mind is that the application is not fast enough to receive the packets, but that's odd. (I'm communicating on localhost and sending ~17 MB of data.) Goreplay, however, uses something similar and works.
The traffic I'm receiving is created by curl-ing against a locally running Python server (http.server) and sending a huge file in the request body. The Python server receives the whole body successfully.
My initial suspicion was right: by modifying the receiving Python server so that it does not read all incoming data at once:
post_body = self.rfile.read(content_len)
but rather in a loop with a 10 ms delay between iterations:
while (total_len < content_len):
    post_body = self.rfile.read(min(16536, content_len - total_len))
    total_len += len(post_body)
    time.sleep(1 / 100.)
I receive all the data in the Go program, and the number of packets processed is approximately the same (I assume there will only rarely be an exact match, as the packets received in Go have already been processed, i.e. put back into order. Or at least I hope so, because there's not much documentation.)
Also, the packet loss changes when the delay is adjusted.
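A possible further mitigation, untested and only a sketch: if the drops happen because the raw socket's kernel receive buffer fills up while the program is busy, enlarging SO_RCVBUF on the underlying descriptor might help (on Linux the kernel may clamp the value to net.core.rmem_max):

package main

import (
    "log"
    "net"
    "syscall"
)

func main() {
    conn, err := net.ListenIP("ip4:tcp", &net.IPAddr{IP: net.ParseIP("127.0.0.1")})
    if err != nil {
        log.Fatal(err)
    }
    raw, err := conn.SyscallConn()
    if err != nil {
        log.Fatal(err)
    }
    // Ask the kernel for a 4 MB receive buffer on the raw socket.
    err = raw.Control(func(fd uintptr) {
        if serr := syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_RCVBUF, 4*1024*1024); serr != nil {
            log.Println("SO_RCVBUF:", serr)
        }
    })
    if err != nil {
        log.Fatal(err)
    }
    // ...continue with conn.ReadFrom exactly as in the program above.
}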
I have a simple listening socket that stops accepting connections, returning ENFILE, after only 13 connections.
I have tried using sysctl in the following manner:
$ sysctl kern.maxfiles
kern.maxfiles: 12288
$ sysctl kern.maxfilesperproc
kern.maxfilesperproc: 10240
$ sudo sysctl -w kern.maxfiles=1048600
kern.maxfiles: 12288 -> 1048600
$ sudo sysctl -w kern.maxfilesperproc=1048576
kern.maxfilesperproc: 10240 -> 1048576
$ ulimit -S -n
256
$ ulimit -S -n 1048576
$ ulimit -S -n
1048576
But this doesn't seem to solve the issue. Is a reboot required, specifically on macOS? I only need it for a single test, so I wasn't planning on making it permanent in /etc/sysctl.conf.
Socket creation:
#if os(Linux)
fileDescriptor = Glibc.socket(AF_INET, Int32(SOCK_STREAM.rawValue), 0)
#else
fileDescriptor = Darwin.socket(AF_INET, SOCK_STREAM, 0)
#endif
And the accepting part:
let result = withUnsafePointer(to: &address) {
    $0.withMemoryRebound(to: sockaddr.self, capacity: 1) { sockAddress in
        // Bind the listening socket to the address
        bind(fileDescriptor, sockAddress, UInt32(MemoryLayout<sockaddr_in>.size))
    }
}

let clientFD = accept(fileDescriptor, nil, nil)
if clientFD == EMFILE || clientFD == ENFILE {
    print("[\(type(of: self))] WARNING: Maximum number of open connections has been reached")
    close(clientFD)
    return nil
}
Notes
libtls (LibreSSL 2.5.5) is used after accept().
ENFILE is the value returned, where I'd personally expect EMFILE.
You're comparing the accepted file descriptor against error codes. That doesn't make sense. Since file descriptors and error codes are both typically small integers, of course you're going to get a "match" eventually.
You want to compare clientFD to -1 and then check errno against EMFILE or ENFILE.
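For illustration only, here is the shape of that check, sketched with Go's syscall package rather than Swift (the port number is made up): the error value, not the returned descriptor, is what you compare against EMFILE/ENFILE.

package main

import (
    "log"
    "syscall"
)

func main() {
    // Set up a listening socket so the accept call below has something to work on.
    listenFD, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_STREAM, 0)
    if err != nil {
        log.Fatal(err)
    }
    defer syscall.Close(listenFD)
    if err := syscall.Bind(listenFD, &syscall.SockaddrInet4{Port: 9300}); err != nil {
        log.Fatal(err)
    }
    if err := syscall.Listen(listenFD, 16); err != nil {
        log.Fatal(err)
    }

    // The point: accept failure is signalled separately from the descriptor,
    // and only the errno-style error is compared against EMFILE / ENFILE.
    nfd, _, err := syscall.Accept(listenFD)
    if err != nil {
        if err == syscall.EMFILE || err == syscall.ENFILE {
            log.Println("out of file descriptors")
            return
        }
        log.Fatal(err)
    }
    // nfd is a valid client descriptor, never an error code.
    syscall.Close(nfd)
}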
I am writing an application which is continuously sending and receiving data. My initial send/receive runs successfully, but when I am expecting data of size 512 bytes in recvfrom, it returns -1, which is "Resource temporarily unavailable", and errno is set to EAGAIN. If I use a blocking call, i.e. without a timeout, the application just hangs in recvfrom. Is there any maximum limit on recvfrom on the iPhone? Below is the function which receives data from the server. I am unable to figure out what is going wrong.
{
    struct timeval tv;
    tv.tv_sec = 3;
    tv.tv_usec = 100000;
    setsockopt(mSock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof tv);

    NSLog(@"Receiving.. sock:%d", mSock);
    recvBuff = (unsigned char *)malloc(1024);
    if (recvBuff == NULL)
        NSLog(@"Cannot allocate memory to recvBuff");

    fromlen = sizeof(struct sockaddr_in);
    n = recvfrom(mSock, recvBuff, 1024, 0, (struct sockaddr *)&from, &fromlen);
    if (n == -1) {
        [self error:@"Recv From"];
        return;
    }
    else
    {
        NSLog(@"Recv Addr: %s Recv Port: %d", inet_ntoa(from.sin_addr), ntohs(from.sin_port));
        strIPAddr = [[NSString alloc] initWithFormat:@"%s", inet_ntoa(from.sin_addr)];
        portNumber = ntohs(from.sin_port);
        lIPAddr = [KDefine StrIpToLong:strIPAddr];
        write(1, recvBuff, n);
        bcopy(recvBuff, data, n);
        actualRecvBytes = n;
        free(recvBuff);
    }
}
Read the manpage:
If no messages are available at the socket, the receive call waits for a message to arrive, unless the socket is nonblocking (see fcntl(2)) in which case the value -1 is returned and the external variable errno set to EAGAIN.
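To make that behaviour concrete, here is a minimal sketch (in Go with the syscall package, purely as illustration; the port is made up): on a non-blocking socket, EAGAIN from recvfrom only means that no datagram has arrived yet, so the caller waits and retries instead of treating it as a fatal error.

package main

import (
    "log"
    "syscall"
    "time"
)

func main() {
    fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_DGRAM, 0)
    if err != nil {
        log.Fatal(err)
    }
    defer syscall.Close(fd)
    if err := syscall.Bind(fd, &syscall.SockaddrInet4{Port: 9400}); err != nil {
        log.Fatal(err)
    }
    if err := syscall.SetNonblock(fd, true); err != nil {
        log.Fatal(err)
    }

    buf := make([]byte, 1024)
    for {
        n, _, err := syscall.Recvfrom(fd, buf, 0)
        if err == syscall.EAGAIN {
            // No datagram queued yet; not an error, just try again later.
            time.Sleep(10 * time.Millisecond)
            continue
        }
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("received %d bytes", n)
    }
}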
I was writing a UDP application and I think I came across a similar issue. Peter Hosey is correct in stating that the given result of recvfrom means that there is no data to be read; but you were wondering, how can there be no data?
If you are sending several UDP datagrams at a time from some host to your iPhone, some of those datagrams may be discarded because the receive buffer size (on the iPhone) is not large enough to accommodate that much data at once.
The robust way to fix the problem is to implement a feature that allows your application to request retransmission of missing datagrams. A less robust solution (that doesn't solve all the issues the robust one does) is to simply increase the receive buffer size using setsockopt(2).
The buffer size adjustment can be done as follows:
int rcvbuf_size = 128 * 1024; // That's 128 KB of buffer space.
if (setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF,
&rcvbuf_size, sizeof(rcvbuf_size)) == -1) {
// put your error handling here...
}
You may have to play around with buffer size to find what's optimal for your application.
For me it was a type issue. Essentially I was assigning the returned value to an int instead of ssize_t (the actual return type of recvfrom, which can also represent the -1 error value):
int rtn = recvfrom(sockfd,... // wrong
instead of:
ssize_t rtn = recvfrom(sockfd,...// correct