I'm writing a simple program in Golang to capture TCP/IP packets using raw sockets:
package main

import (
    "fmt"
    "log"
    "net"
)

func main() {
    netaddr, err := net.ResolveIPAddr("ip", "127.0.0.1")
    if err != nil {
        log.Fatal(err)
    }
    conn, err := net.ListenIP("ip:tcp", netaddr)
    if err != nil {
        log.Fatal(err)
    }
    packet := make([]byte, 64*1024)
    numPackets := 0
    totalLen := 0
    for {
        packetLen, _, err := conn.ReadFrom(packet)
        if err != nil {
            log.Fatal(err)
        }
        // Go strips the IPv4 header from raw ip4 reads, so the buffer
        // starts at the TCP header; byte 12 holds the data offset
        // (header length) in 32-bit words.
        dataOffsetWords := (packet[12] & 0xF0) >> 4
        dataOffset := 4 * dataOffsetWords
        payload := packet[dataOffset:packetLen]
        numPackets += 1
        totalLen += len(payload)
        fmt.Println("Num packets:", numPackets, ", Total len:", totalLen)
    }
}
When I compare the number of packets the program receives, and the total amount of data they contain, with the number of packets Wireshark sees and the total data transmitted, I see that I lose 15-30% of all packets and data on every run.
Why?
The only thing that comes to mind is that the application is not fast enough to receive the packets, but that would be odd. (I'm communicating on localhost and sending ~17 MB of data.) Goreplay, however, uses something similar and works.
The traffic I'm receiving is created by curl-ing a locally running Python server (http.server) and sending a huge file in the request body. The Python server downloads the whole body successfully.
My initial suspicion was right: by modifying the receiving Python server so that it does not read all incoming data at once:
post_body = self.rfile.read(content_len)
but rather in a loop with a 10 ms delay between iterations:
total_len = 0
while total_len < content_len:
    post_body = self.rfile.read(min(16536, content_len - total_len))
    total_len += len(post_body)
    time.sleep(1 / 100.)
I receive all the data in the Go program, and the number of packets processed is approximately the same. (I assume the counts will only rarely match exactly, as the packets received in Go have already been processed, i.e. put back into order. Or at least I hope so, because there's not much documentation.)
Also, the packet loss changes when the delay is adjusted.
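My best guess for the mechanism (an assumption, not something I've verified): the kernel's receive buffer for the raw socket overflows while the reader falls behind, and everything that doesn't fit is silently dropped. A minimal mitigation sketch using the *net.IPConn from the code above; the 4 MiB value is an arbitrary illustrative size, and the OS may cap it (net.core.rmem_max on Linux):

// Right after net.ListenIP succeeds, ask the OS for a larger
// receive buffer so bursts survive a slow reader.
if err := conn.SetReadBuffer(4 << 20); err != nil {
    log.Fatal(err)
}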
I'm using a REST API for getting orders from a webshop; the result is at most 100 orders at a time, and there is a page parameter to use for the next 100 orders.
I have solved this in a viewmodel using a periodic action (disables = not self.oclIsIn(#InProgress)) like this:
vCount := self.Orders->size;
vPage := vPage + 1;
vResponse := selfVM.RestGet(
    String.Format('{0}/wp-json/wc/v3/orders?per_page={1}&page={2}', self.Channel.Url, self.Channel.Source.MaxCount, vPage),
    self.Channel.UserName,
    self.Channel.Password, '');
vResponse := String.Format('{0}{1}{2}', '{"orders":', vResponse, '}');
--self.JSonData := self.JSonData + vResponse;
vOutput := vOutput + self.MergeTaJson( ImportBatch.Viewmodels.WooOrderJSon, vResponse );
if ((self.Orders->size - vCount) = 0) then
    if vOutput->IsNullOrWhiteSpace then
        self.Fail
    else
        --self.ParsingLog := vOutput;
        self.Success
    endif;
    true
else
    vCount := self.Orders->size;
    false
endif
It keeps going until the API returns 0 orders; then the state is set to Done and it stops.
Now I want to do the same server side, but how do I solve this without a periodic action like the one above? And I want to do the same in an AsyncTicket "action".
Is it possible for you to treat the whole server-side job as a periodic action? Each execution of a server-side (SS) job should fit in "one go" and be saved in one transaction.
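For readers outside MDriven, the loop described above is the usual fetch-until-empty pagination pattern. Here is a minimal Go sketch of the same idea, for illustration only (not MDriven-specific; it assumes basic auth, decodes each page as raw JSON, and abbreviates error handling; imports: encoding/json, fmt, net/http):

// fetchAllOrders pages through the WooCommerce orders endpoint until
// an empty page comes back, mirroring the vCount/Orders->size check
// in the viewmodel above.
func fetchAllOrders(baseURL, user, pass string) ([]json.RawMessage, error) {
    var all []json.RawMessage
    for page := 1; ; page++ {
        url := fmt.Sprintf("%s/wp-json/wc/v3/orders?per_page=100&page=%d", baseURL, page)
        req, err := http.NewRequest("GET", url, nil)
        if err != nil {
            return nil, err
        }
        req.SetBasicAuth(user, pass)
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return nil, err
        }
        var batch []json.RawMessage
        err = json.NewDecoder(resp.Body).Decode(&batch)
        resp.Body.Close()
        if err != nil {
            return nil, err
        }
        if len(batch) == 0 { // empty page: we are done
            return all, nil
        }
        all = append(all, batch...)
    }
}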
I started the following code to handle a Bosch BME280 sensor with Nucleo-F446ZE and Nucleo-F411RE boards.
with STM32.Device; use STM32.Device;
with STM32.GPIO;   use STM32.GPIO;
with STM32;        use STM32;
with STM32.I2C;
with HAL.I2C;      use HAL.I2C;
use HAL;

procedure Simple_I2C_Demo is

   --  I2C Bus selected
   Selected_I2C_Port      : constant access STM32.I2C.I2C_Port := I2C_1'Access;
   Selected_I2C_Port_AF   : constant GPIO_Alternate_Function := GPIO_AF_I2C1_4;
   Selected_I2C_Clock_Pin : GPIO_Point renames PB8;
   Selected_I2C_Data_Pin  : GPIO_Point renames PB9;

   Port : constant HAL.I2C.Any_I2C_Port := Selected_I2C_Port;

   --  Shift one because of 7-bit addressing
   I2C_Address : constant HAL.I2C.I2C_Address := 16#76# * 2;

   procedure SetupHardware is
      GPIO_Conf_AF         : GPIO_Port_Configuration (Mode_AF);
      Selected_Clock_Speed : constant := 10_000;
   begin
      Enable_Clock (Selected_I2C_Clock_Pin);
      Enable_Clock (Selected_I2C_Data_Pin);
      Enable_Clock (Selected_I2C_Port.all);

      STM32.Device.Reset (Selected_I2C_Port.all);

      Configure_Alternate_Function (Selected_I2C_Clock_Pin, Selected_I2C_Port_AF);
      Configure_Alternate_Function (Selected_I2C_Data_Pin, Selected_I2C_Port_AF);

      GPIO_Conf_AF.AF_Speed       := Speed_100MHz;
      GPIO_Conf_AF.AF_Output_Type := Open_Drain;
      GPIO_Conf_AF.Resistors      := Pull_Up;
      Configure_IO (Selected_I2C_Clock_Pin, GPIO_Conf_AF);
      Configure_IO (Selected_I2C_Data_Pin, GPIO_Conf_AF);

      STM32.I2C.Configure
        (Selected_I2C_Port.all,
         (Clock_Speed     => Selected_Clock_Speed,
          Addressing_Mode => STM32.I2C.Addressing_Mode_7bit,
          Own_Address     => 16#00#,
          others          => <>));

      STM32.I2C.Set_State (Selected_I2C_Port.all, Enabled => True);
   end SetupHardware;

   ID     : HAL.I2C.I2C_Data (1 .. 1);
   Status : HAL.I2C.I2C_Status;

begin
   SetupHardware;

   HAL.I2C.Mem_Read (This          => Port.all,
                     Addr          => I2C_Address,
                     Mem_Addr      => 16#D0#,
                     Mem_Addr_Size => HAL.I2C.Memory_Size_8b,
                     Data          => ID,
                     Status        => Status,
                     Timeout       => 15000);

   if Status /= Ok then
      raise Program_Error with "I2C read error:" & Status'Img;
   end if;
end Simple_I2C_Demo;
In this simple example, I always get an error status at the end of the read. In the context of more complete code, I always get a Busy status after waiting 15 seconds.
I really don't see what is going on, as my code is largely inspired by code I found on GitHub for an I2C sensor.
Maybe I forgot something specific to I2C init, but as I'm not an expert, I prefer to ask the experts :)
I finally found what was wrong. After testing in C using the STM HAL and investigating the Ada configuration code, I found that a line was missing:
GPIO_Conf_AF.AF_Speed       := Speed_100MHz;
GPIO_Conf_AF.AF_Output_Type := Open_Drain;
GPIO_Conf_AF.Resistors      := Pull_Up;

--  Missing configuration part of the record
GPIO_Conf_AF.AF := Selected_I2C_Port_AF;
--  That should be present even though there was a call to configure
--  each pin a few lines above

Configure_IO (Selected_I2C_Clock_Pin, GPIO_Conf_AF);
Configure_IO (Selected_I2C_Data_Pin, GPIO_Conf_AF);
Using Configure_IO after Configure_Alternate_Function overwrites the configuration, and as part of the record had been left uninitialized, the GPIOs were incorrectly configured.
To be more precise, after looking at the code inside the GPIO handling: Configure_IO calls Configure_Alternate_Function using the AF part of the GPIO_Port_Configuration record. In my case, it was resetting it.
With the missing line, the code now runs correctly with Mem_Read and Master_Transmit/Master_Receive.
A big thanks to ralf htp for advising me to dive into the generated C code.
No; the difference between HAL_I2C_Mem_Read and the HAL_I2C_Master_Transmit, wait, HAL_I2C_Master_Receive sequence is only a nuance; cf. How do I use the STM32CUBEF4 HAL library to read out the sensor data with i2c?. If you know what size of data you want to receive, you can use the HAL_I2C_Master_Transmit, wait, HAL_I2C_Master_Receive sequence.
A C HAL I2C example is at https://letanphuc.net/2017/05/stm32f0-i2c-tutorial-7/:
// Trigger temperature measurement
buffer[0] = 0x00;
HAL_I2C_Master_Transmit(&hi2c1, 0x40 << 1, buffer, 1, 100);
HAL_Delay(20);
HAL_I2C_Master_Receive(&hi2c1, 0x40 << 1, buffer, 2, 100);
// buffer[0]: MSB data
// buffer[1]: LSB data
rawT = buffer[0] << 8 | buffer[1];  // combine two 8-bit values into one 16-bit value
Temperature = ((float)rawT / 65536) * 165.0 - 40.0;

// Trigger humidity measurement
buffer[0] = 0x01;
HAL_I2C_Master_Transmit(&hi2c1, 0x40 << 1, buffer, 1, 100);
HAL_Delay(20);
HAL_I2C_Master_Receive(&hi2c1, 0x40 << 1, buffer, 2, 100);
// buffer[0]: MSB data
// buffer[1]: LSB data
rawH = buffer[0] << 8 | buffer[1];  // combine two 8-bit values into one 16-bit value
Humidity = ((float)rawH / 65536) * 100.0;
HAL_Delay(100);
Note that it uses HAL_I2C_Master_Transmit, waits 20 ms until the slave puts the data on the bus, and then receives it with HAL_I2C_Master_Receive. This code is working; I tested it myself.
Possibly the problem is that the BME280 supports single-byte reads and multi-byte reads (it keeps sending bytes until it receives a NOACK and a stop condition). HAL_I2C_Mem_Read waits for the ACK or stop, but for some reason it does not get one, which causes the Busy and then the Timeout behavior; cf. page 33 of the datasheet http://www.embeddedadventures.com/datasheets/BME280.pdf for the multi-byte read. You specified a timeout of 15 s, and you get the timeout after 15 s. So it appears that the BME280 simply does not stop sending, or it sends nothing at all, including no NOACK and stop condition ...
HAL_I2C_Mem_Read sometimes causes problems, depending on the slave: https://community.arm.com/developer/ip-products/system/f/embedded-forum/7989/trouble-getting-values-with-i2c-using-hal_library
By the way, with the
HAL.I2C.Mem_Read (This          => Port.all,
                  Addr          => I2C_Address,
                  Mem_Addr      => 16#D0#,
                  Mem_Addr_Size => HAL.I2C.Memory_Size_8b,
                  Data          => ID,
                  Status        => Status,
                  Timeout       => 15000);
you try to read 1 byte, the chip identification number, from register 16#D0#; cf. http://www.embeddedadventures.com/datasheets/BME280.pdf, page 26.
I need to send an RST flag on one socket; is it possible?
When I close the TIdTCPClient, the TCP server program tells me that the connection is still alive, and only after 2 minutes does the connection disappear.
I'm doing this:
TCPCliente := TIdTCPClient.Create(nil);
TCPCliente.Host := edtIP.Text;
TCPCliente.Port := 492;
TCPCliente.ConnectTimeout := 1000;
TCPCliente.Connect;
TCPCliente.Socket.WriteLnRFC(TextoEnvio, IndyTextEncoding_8Bit);
Texto := TCPCliente.Socket.ReadLn(IndyTextEncoding_8Bit);
TCPCliente.Socket.Close;
TCPCliente.Disconnect;
TCPCliente.Free;
I'm using RAD Studio XE8.
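The standard way to get an RST instead of the normal FIN sequence is an abortive close: enable SO_LINGER with a zero timeout before closing, and the OS resets the connection, discarding any unsent data. A sketch of the mechanism in Go, for illustration only (Indy exposes the underlying socket handle, so the same SO_LINGER option can be applied there, but the exact Indy calls are not shown here):

// abortiveClose forces an RST on close by setting linger to zero.
func abortiveClose(conn net.Conn) error {
    if tcp, ok := conn.(*net.TCPConn); ok {
        if err := tcp.SetLinger(0); err != nil {
            return err
        }
    }
    return conn.Close() // OS sends RST, not FIN, and discards unsent data
}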
I am trying to write a TCP client-server program to find the maximum number of concurrent connections handled by the server. So I created a simple server and initiated multiple TCP connections (using goroutines).
server.go
func main() {
    ln, err := net.Listen("tcp", ":9100")
    if err != nil {
        fmt.Println(err.Error())
    }
    defer ln.Close()
    fmt.Println("server has started")
    for {
        conn, err := ln.Accept()
        if err != nil {
            fmt.Println(err.Error())
        }
        conn.Close()
        fmt.Println("received one connection")
    }
}
client.go
func connect(address string, wg *sync.WaitGroup) {
    conn, err := net.Dial("tcp", address)
    if err != nil {
        fmt.Println(err.Error())
        os.Exit(-1)
    }
    defer conn.Close()
    fmt.Println("connection is established")
    wg.Done()
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 10000; i++ {
        wg.Add(1)
        fmt.Printf("connection %d", i)
        fmt.Println()
        go connect(":9100", &wg)
    }
    wg.Wait()
    fmt.Print("all the connection served")
}
I was expecting the server to become unresponsive because of the TCP SYN attack, but my client application crashed after ~2000 connection requests. The following is the error message:
dial tcp :9100: too many open files
exit status 255
I need help with the following questions:
What are the required changes in client.go to emulate TCP connection flooding of my Go server?
How can I increase the number of open FDs on macOS, given there is a limit on the maximum number of open FDs per process?
What is the default TCP queue size for Listen()? (In C, the queue size is specified in listen(fd, queue_size).)
Edit:
I found some help for question #2 on increasing the number of open file descriptors, but it is not working (no change in the client error message).
ulimit settings
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 10000
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited
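A note on question #1: as written, client.go closes every connection as soon as the goroutine returns, and server.go also closes each accepted connection immediately, so nothing ever accumulates; to measure the server's concurrent-connection ceiling, both sides have to hold connections open. (A true SYN flood is different again: net.Dial completes the three-way handshake, so emulating half-open SYN flooding would require crafting raw packets.) Below is a sketch of such a client, with illustrative values only (2000 targets, 100 in-flight dials to stay under the FD limit); the server would likewise need to keep accepted connections alive instead of calling conn.Close():

package main

import (
    "fmt"
    "net"
    "sync"
)

func main() {
    const target = 2000
    var (
        mu    sync.Mutex
        conns []net.Conn
        wg    sync.WaitGroup
    )
    sem := make(chan struct{}, 100) // bound concurrent dial attempts
    for i := 0; i < target; i++ {
        wg.Add(1)
        sem <- struct{}{}
        go func() {
            defer wg.Done()
            defer func() { <-sem }()
            c, err := net.Dial("tcp", "127.0.0.1:9100")
            if err != nil {
                fmt.Println(err)
                return
            }
            mu.Lock()
            conns = append(conns, c) // hold the connection; do not Close
            mu.Unlock()
        }()
    }
    wg.Wait()
    fmt.Println("connections held open:", len(conns))
    fmt.Scanln() // keep the process (and its sockets) alive
}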
I am writing an application which continuously sends and receives data. My initial send/receive runs successfully, but when I expect data of size 512 bytes in recvfrom, its return value is -1, which is "Resource temporarily unavailable", and errno is set to EAGAIN. If I use a blocking call, i.e. one without a timeout, the application just hangs in recvfrom. Is there any maximum limit on recvfrom on iPhone? Below is the function which receives data from the server. I am unable to figure out what could be going wrong.
{
    struct timeval tv;
    tv.tv_sec = 3;
    tv.tv_usec = 100000;
    setsockopt(mSock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof tv);

    NSLog(@"Receiving.. sock:%d", mSock);
    recvBuff = (unsigned char *)malloc(1024);
    if (recvBuff == NULL)
        NSLog(@"Cannot allocate memory to recvBuff");

    fromlen = sizeof(struct sockaddr_in);
    n = recvfrom(mSock, recvBuff, 1024, 0, (struct sockaddr *)&from, &fromlen);
    if (n == -1) {
        [self error:@"Recv From"];
        return;
    }
    else {
        NSLog(@"Recv Addr: %s Recv Port: %d", inet_ntoa(from.sin_addr), ntohs(from.sin_port));
        strIPAddr = [[NSString alloc] initWithFormat:@"%s", inet_ntoa(from.sin_addr)];
        portNumber = ntohs(from.sin_port);
        lIPAddr = [KDefine StrIpToLong:strIPAddr];
        write(1, recvBuff, n);
        bcopy(recvBuff, data, n);
        actualRecvBytes = n;
        free(recvBuff);
    }
}
Read the manpage:
If no messages are available at the socket, the receive call waits for a message to arrive, unless the socket is nonblocking (see fcntl(2)) in which case the value -1 is returned and the external variable errno set to EAGAIN.
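For comparison (not from the original thread): in Go, the moral equivalent of SO_RCVTIMEO plus EAGAIN is a read deadline, where an expired deadline surfaces as a net.Error whose Timeout() method reports true, meaning "no data yet" rather than a broken socket. A small sketch, assuming a *net.UDPConn (imports: errors, net, time):

// readWithTimeout reads one datagram, reporting a deadline expiry
// separately from real errors.
func readWithTimeout(conn *net.UDPConn, buf []byte, d time.Duration) (n int, timedOut bool, err error) {
    if err = conn.SetReadDeadline(time.Now().Add(d)); err != nil {
        return 0, false, err
    }
    n, _, err = conn.ReadFromUDP(buf)
    var nerr net.Error
    if errors.As(err, &nerr) && nerr.Timeout() {
        return n, true, nil // nothing arrived within d: the EAGAIN case
    }
    return n, false, err
}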
I was writing a UDP application and I think I came across a similar issue. Peter Hosey is correct in stating that the given result of recvfrom means that there is no data to be read; but you were wondering: how can there be no data?
If you are sending several UDP datagrams at a time from some host to your iPhone, some of those datagrams may be discarded because the receive buffer size (on the iPhone) is not large enough to accommodate that much data at once.
The robust way to fix the problem is to implement a feature that allows your application to request retransmission of missing datagrams. A less robust solution, which doesn't solve all the issues the robust one does, is simply to increase the receive buffer size using setsockopt(2).
The buffer size adjustment can be done as follows:
int rcvbuf_size = 128 * 1024;   // that's 128 KB of buffer space
if (setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF,
               &rcvbuf_size, sizeof(rcvbuf_size)) == -1) {
    // put your error handling here...
}
You may have to play around with buffer size to find what's optimal for your application.
For me it was a casting issue: essentially I was assigning the returned value to an int instead of ssize_t (recvfrom's actual return type):
int rtn = recvfrom(sockfd, ...     // wrong
instead of:
ssize_t rtn = recvfrom(sockfd, ... // correct