Snort not showing blocked/dropped packets

I'm trying to detect ping flood attacks with Snort. I have included the rule
(drop icmp any any -> any any (itype:8; threshold, track by_src, count 20, seconds; msg:"Ping flood attack detected"; sid:100121))
in Snort's ddos.rule file.
I'm attacking using the command
hping3 -1 --fast
The ping statistics in the attacking machine says
100% packet loss
However, the Snort action stats show the verdicts as
Block -> 0.
Why is this happening?

A few things to note:
1) This rule is missing the value for seconds. You need to specify a time period; you currently have "seconds;" but you need something like "seconds 5;". Since this is not valid, I'm not sure when Snort is actually going to generate an alert, which means it may just be dropping all of the ICMP packets without generating any alerts.
2) This rule is going to drop EVERY ICMP packet with itype 8. The threshold only specifies when to alert, not when to drop, so this rule will drop every packet that matches and then generate one alert per 20 packets dropped. See the manual on rule thresholds here.
3) If you do not have Snort configured in inline mode, it will not be able to actually block any packets. See more information about the three different modes here.
If you just want to detect and drop ping floods, you should change this rule to use the detection_filter option instead of threshold. If you want to allow legitimate pings and drop only ping floods, you do not want threshold, because as written the rule will block all ICMP itype 8 packets. With detection_filter you can write a rule that drops only once Snort sees 20 pings within 5 seconds from the same source host. Here is an example of what your rule might look like:
drop icmp any any -> any any (itype:8; detection_filter:track by_src, count 20, seconds 5; msg:"Ping flood attack detected"; sid:100121;)
If Snort sees 20 pings from the same source host within 5 seconds, it will then drop and generate an alert. See the Snort manual on detection filters here.
With this configuration, you can allow legitimate pings on the network and block ping floods from the same source host.

Related

STM32 UART in DMA mode stops receiving after receiving from a host with wrong baud rate

The scenario: I have an STM32 MCU which uses a UART in DMA mode with the idle interrupt for RS485 data transfer. The baud rate of the UART is set in CubeMX, in this case to 115200. My code works fine when the host uses the correct baud rate; it is also "long time" stable, no issues or worries.
BUT: when I set the wrong baud rate at the host, e.g. 57600 instead of 115200, the UART stops receiving data, and even if I later set the baud rate at the host back to the same baud rate the microcontroller uses, it won't work. The only way to solve this issue so far is to reset the MCU and connect again with the correct baud rate.
To give you some (Pseudo-)Code:
uint8_t UART_Buf[128];
HAL_UART_Receive_DMA(&huart2, UART_Buf, 128);
__HAL_UART_ENABLE_IT(&huart2, UART_IT_IDLE);
Or in plain words: there is a UART buffer for DMA (UART_Buf[128]) and the UART is started with HAL_UART_Receive_DMA(...). DMA Rx is set to circular mode in CubeMX, and the idle interrupt is activated using the HAL macro __HAL_UART_ENABLE_IT(...). This code works fine so far.
Works fine means:
when I transmit data from my PC to the micro, the (one) idle interrupt is triggered correctly by the MCU. In the ISR I set a flag to start the data parsing afterwards. I receive exactly the number of bytes I have sent, and all is fine.
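For reference, a minimal sketch of the idle-interrupt handling described above could look like the following (RxLen, RxReady and the handler body are illustrative assumptions, not the asker's actual code):
volatile uint16_t RxLen = 0;
volatile uint8_t RxReady = 0;

void USART2_IRQHandler(void)
{
    if (__HAL_UART_GET_FLAG(&huart2, UART_FLAG_IDLE)) {
        __HAL_UART_CLEAR_IDLEFLAG(&huart2);
        /* bytes received = buffer size minus what the circular DMA still has to write */
        RxLen = 128 - __HAL_DMA_GET_COUNTER(huart2.hdmarx);
        RxReady = 1;                  /* parse UART_Buf in the main loop */
    }
    HAL_UART_IRQHandler(&huart2);     /* let HAL handle everything else */
}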
BUT: when I make the wrong setting in my Terminal Program and instead of the (correct) baud rate of 115200, the baud rate select menu is set to e.g. 57600, the trouble begins:
The idle interrupt will still trigger after each transmission.
But it triggers 2-4 times in a quick "burst" (depending on the baud rate) and the number of bytes received is 0. I'd expect at least some garbage data, but there are exactly 0 bytes in the buffer, which I can check with the debugger. Obviously nothing is received. When I change the baud rate in my terminal program and restart it, there is still nothing received on the MCU.
I could live with 0 received bytes if the baud rate of the host is incorrect, but it's pretty uncool that one incoming transmission from a host with the wrong baud rate disables the UART until a hardware reset is done.
My attempts to resolve this were so far:
count the "Idle Interrupt Bursts" in combination with 0 received bytes to trigger a "self reset" routine, that stops the UART and restarts it, using the MX_USART2_UART_Init(); Routine. With zero effect. I can see the Idle Interrupt is still triggered correctly, but the buffer remains empty and no data is transferred into the buffer. The UART remains in a non-receiving state.
The Question
Has anyone out there experienced similar issues, and if yes: how did you solve that?
Additional info: this happens on an STM32F030 as well as on an STM32G03x.
When you send to the UART at the wrong baud rate it will appear to the receiver as framing errors and/or noise errors. It could also appear as random characters being received correctly, but this is less likely so don't be surprised to have nothing in your buffer.
When you are receiving with DMA, it is normal to turn the error interrupt on or else poll the error bits. When an error is detected you would then re-initialize everything and restart the DMA. This sounds like what you are trying to do by counting the idle interrupts, but you are just not checking the right bits.
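As a rough illustration of "checking the right bits" (names assumed, using the flag macros of the ICR-based HAL families such as the F030/G03x), this could look something like:
/* e.g. when the idle interrupt fired but 0 bytes were received */
if (__HAL_UART_GET_FLAG(&huart2, UART_FLAG_ORE) ||
    __HAL_UART_GET_FLAG(&huart2, UART_FLAG_FE))
{
    __HAL_UART_CLEAR_FLAG(&huart2, UART_CLEAR_OREF | UART_CLEAR_FEF);  /* clear ORE/FE */
    HAL_UART_DMAStop(&huart2);                       /* abort the faulted transfer */
    HAL_UART_Receive_DMA(&huart2, UART_Buf, 128);    /* re-arm circular reception */
}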
If you don't want to do that, you could imagine doing nothing at the driver level and instead trying to resynchronise at a higher level (e.g. start reading again and discard everything until a newline character), but you will have to bear in mind at least two things:
First, make sure you clear the DDRE bit in the USART_CR3 register. The name "DMA Disable on Reception Error" speaks for itself.
Second, the UART peripheral is able to self resynchronize, as long as you have an idle gap between bytes. If you switch the transmitter to the correct baud rate but keep blasting out data then the receiver may never correctly identify which bit is a start bit.
After investigating this issue a little bit further, I found a solution.
Abstract:
When a host talks to the MCU's UART at a different baud rate than the UART is configured for, the UART will go into an error state and stop DMA transfers to the RX buffer. You can check whether there is an error with the HAL_UART_GetError(...) function. If there is an error, stop the UART/DMA and restart it.
The Details:
First of all, it was not the DDRE bit in the USART_CR3 register; this was set to 0 by CubeMX. But the hint from Tom V led me in the right direction.
I tried to recover the UART by playing around with the register bits. I read through the UART section of the reference manual multiple times and tried to figure out which bits to set, and in which order, to resolve the error condition manually.
What I found out:
When a transmission with the wrong baud rate is received by the UART, the following changes occur in the UART registers (on an STM32F030):
Control register 1 (USART_CR1) - bit 8 (PEIE) goes from 1 to 0. PEIE is the parity error interrupt enable bit.
Control register 2 (USART_CR2) - remains unchanged
Control register 3 (USART_CR3) - changes from 0d16449 to 0d16384, which means
Bit 0 (EIE - Error Interrupt enable) goes from 1 to 0
Bit 6 (DMAR - DMA enable receiver) goes from 1 to 0
Bit 14 (DEM - Driver enable mode) remains unchanged at 1
USART_CR3.DEM makes sense. I am using the RS485-Functionality of the F030, so the UART handles the Driver-Enable GPIO by itself.
The transitions from 1 to 0 of USART_CR3.EIE and USART_CR3.DMAR are most probably the reason why no more data is transferred to the DMA buffer.
Besides that, the error flags in the interrupt and status register (USART_ISR) for ORE and FE are set. ORE stands for Overrun Error and FE for Framing Error. Although these bits can be cleared by writing a 1 to the corresponding bits of the interrupt flag clear register (USART_ICR), the ErrorCode in the hUART struct remains at the initial error value.
At the end of my trial-and-error process, I managed to get all registers back to the same values they had during valid transmissions, but there were still no bytes received. Whatever I tried, it had no effect; the UART remained in a non-receiving state. So I decided to use the "brute force" approach and use the HAL functions, which I know work.
Finally the solution is pretty simple:
if an Idle Interrupt is detected, but the number of received bytes is 0
=> check the Error-Status of the UART with HAL_UART_GetError(...)
If there is an error, stop the UART with HAL_UART_DMAStop(...) and restart it with HAL_UART_Receive_DMA(...)
The code:
if(RxLen) {
    // normal execution, number of received bytes > 0
    if(UA_RXCallback[i]) (*UA_RXCallback[i])(hUA);          // exec RX callback function
} else {
    if(HAL_UART_GetError(&huart2)) {
        HAL_UART_DMAStop(&huart2);                          // STOP UART
        MX_USART2_UART_Init();                              // INIT UART
        HAL_UART_Receive_DMA(&huart2, UA2_Buf, UA2_BufSz);  // START UART DMA
        __HAL_UART_CLEAR_IDLEFLAG(&huart2);                 // clear idle IT flag
        __HAL_UART_ENABLE_IT(&huart2, UART_IT_IDLE);        // enable idle interrupt
    }
}
I had a similar issue. I'm using DMA to receive data and then periodically checking how many bytes were received. After a bit error, it would not recover. The solution for me was to first subscribe to the ErrorCallback on the UART_HandleTypeDef.
In the error handler, I then call UART_Start_Receive_DMA(...) again. This seems to restart the UART and DMA without issue.
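A minimal sketch of that approach, assuming the weak HAL_UART_ErrorCallback is overridden (registering a callback via HAL_UART_RegisterCallback would be the other option) and reusing the buffer names from the snippet above:
void HAL_UART_ErrorCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART2) {
        HAL_UART_DMAStop(huart);                           /* abort the faulted transfer */
        HAL_UART_Receive_DMA(huart, UA2_Buf, UA2_BufSz);   /* re-arm circular reception */
    }
}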

Trouble with communication with a USB B type machine with MATLAB

I am using MATLAB to communicate with several machines.
I am trying to connect to an LCC25 (liquid crystal retarder controller made by Thorlabs) using a USB B to USB A cable.
I wrote code like this:
clear all; clc;
%%
ss=serial('COM7','BaudRate',9600,'DataBits',8);
set(ss,'Parity','none');
set(ss,'Terminator','LF');
fopen(ss);
fprintf(ss,'*idn?');
aa=fscanf(ss)
fclose(ss)
Then I get "Warning : Unsuccessful read : A timeout occurred before the Terminator was reached aa=="
Is there any problem in my code?
I am also interested in buying the LCC25 and controlling it with MATLAB, so this is very interesting for me and I would love to find out whether it works...
To debug your code, I am wondering what happens when you comment out everything but:
ss=serial('COM7','BaudRate',9600,'DataBits',8);
set(ss,'Parity','none');
set(ss,'Terminator','LF');
fopen(ss);
That way we can tell whether the problem is in establishing the connection itself (which, by the way, you should not run every time!) or in trying to send a command to the device...
If the object creation is successful, you should see something like this:
Serial Port Object : Serial-COM7
Communication Settings
Port: COM7
BaudRate: 9600
Terminator: 'LF'
Communication State
Status: closed
RecordStatus: off
Read/Write State
TransferStatus: idle
BytesAvailable: 0
ValuesReceived: 0
ValuesSent: 0
Then you can try to run
fopen(ss)
fscanf(ss)
in a separate file, and see what the output is. If all of this works, you can start to try sending commands using the 'fprintf' command, but make sure not to run the 'serial' and 'fopen' commands every time.
I am wondering where you obtained the command string '*idn?' - did you find this in the help file? The same for the terminator 'LF': are you sure this is the correct terminator to use for the LCC25? Reading the error message you received, I suspect you might need to use a different terminator, such as 'CR'.

Packets lost with XBee Series 1

I have two XBee Series 1 modules. I have them as end devices working in API mode and talking to each other. The first XBee is attached to a Raspberry Pi, while the other is on my PC, where I watch the terminal tab of the XCTU program. The baud rate I use is 125000.
From the Raspberry Pi I try to send a JPG image which is 30 Kbytes. I send data frames 100 bytes long (the largest allowed according to the XBee documentation). Inside a loop I create and send the packets, and I also have a cout statement that prints the loop number. Everything is fine and all bytes are sent. When I comment out the cout statement, not all bytes are sent.
From what I have understood, the cout statement works as a delay between packets, but I still cannot understand why this is happening, since I am supposedly using half the speed ...
I hope I was clear and look forward to a reply.
UPDATE
Just to summarize, I changed the baud rate to 250000, where there is the same behavior as at 125000. I also implemented hardware flow control by checking the CTS signal. When the XBees are in transparent mode I need a delay of around 150 us between sending characters. The same goes for API mode too. The difference at 125000 baud in API mode was that the needed delay was enough between each data packet, but at 250000 the delay is needed between each byte that I send. If I do the above, everything goes well.
The next thing I did was to plug both XBees into my PC in transparent mode. I went to the terminal tab of the XCTU software, where I chose "assemble packet" and sent around 3000 bytes to the other XBee. The result was the same: the second XBee received about 1500 bytes and then, each time I sent one byte from the first to the second, the "lost bytes" were received in packets of 1000. :/
So could anyone know what am I doing wrong?
You should connect the /CTS pin from the XBee module into the Raspberry Pi, and have your routine stop sending data when the XBee de-asserts it.
At higher baud rates, it's possible to stream data into the XBee module faster than it can send to the remote module. The local XBee module uses the /CTS pin to notify the host when its buffers are almost full and the host should stop sending. People refer to this as hardware flow control.
It may be necessary to modify the serial driver on the Raspberry Pi to make use of that signal -- it should pause the transmit buffer when de-asserted, and automatically resume sending when re-asserted.
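As a hedged illustration of one way to honour /CTS from user space instead of patching the driver (the device path and the assumption that the Pi's CTS pin is wired up and enabled are mine, not the answer's):
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Sketch: open the serial port and enable RTS/CTS hardware flow control.
   Baud rate setup is omitted here because 125000 is a non-standard rate. */
int open_xbee_port(void)
{
    int fd = open("/dev/ttyAMA0", O_RDWR | O_NOCTTY);   /* assumed device path */
    if (fd < 0)
        return -1;

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);              /* raw mode: no line editing or translation */
    tio.c_cflag |= CRTSCTS;       /* pause transmission while /CTS is de-asserted */
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}
With this in place, output to the port is throttled while the XBee holds /CTS de-asserted, which is the behaviour described above.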

Socket SO_SNDTIMEO timeout is double the set value

I recently patched my copy of GStreamer 0.10.36 to time out the tcpclientsink if the network connection is switched between wired/wireless (More information at Method to Cancel/Abort GStreamer tcpclientsink Timeout). It's a simple change. I just added the following to the gst_tcp_client_sink_start() function of gsttcpclientsink.c:
struct timeval timeout;
timeout.tv_sec = 60;
timeout.tv_usec = 0;
...
setsockopt (this->sock_fd.fd, SOL_SOCKET, SO_SNDTIMEO, (char *)&timeout, sizeof(timeout));
The strange thing is that the actual timeout (measured by wall clock time) is always double the value I set. If I disrupt the network connection with the timeout set to 60 seconds, it will take 120 seconds for GStreamer/socket to abort. If I set the timeout to 30 seconds, it will take 60 seconds. If I set the timeout to 180 seconds, it will take 360 seconds. Is there something about sockets that I don't understand that might be causing this behavior? I'd really like to know what's going on here.
This might be a duplicate of Socket SO_RCVTIMEO Timeout is double the set value in C++/VC++
I'm pasting my answer below since I think I had a similar problem.
Pasted answer
SO_RCVTIMEO and SO_SNDTIMEO do not work on all socket operations; you should use non-blocking mode and select() instead.
The behaviour may change on different operating system configurations.
On my system, connect() times out after twice the value I set in SO_RCVTIMEO. A quick hack like setting SO_RCVTIMEO to x/2 before the connect and to x after it works, but the proper solution is using select().
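For reference, a minimal sketch of the non-blocking connect plus select() pattern (illustrative only, not the GStreamer patch itself):
#include <sys/socket.h>
#include <sys/select.h>
#include <fcntl.h>
#include <errno.h>

/* Returns 0 on success, -1 on error or timeout. */
static int connect_with_timeout(int fd, const struct sockaddr *addr,
                                socklen_t len, int seconds)
{
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);          /* switch to non-blocking */

    if (connect(fd, addr, len) == 0)
        return 0;                                    /* connected immediately */
    if (errno != EINPROGRESS)
        return -1;                                   /* real connect error */

    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };

    if (select(fd + 1, NULL, &wfds, NULL, &tv) <= 0)
        return -1;                                   /* timed out or select failed */

    int err = 0;
    socklen_t errlen = sizeof(err);
    getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &errlen);
    return err == 0 ? 0 : -1;                        /* check the deferred connect result */
}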
References
Discussion on this problem (read comments to answer):
https://stackoverflow.com/a/4182564/4074995
How to use select to achieve the desired result:
http://beej.us/guide/bgnet/output/html/multipage/advanced.html#select
C: socket connection timeout

Using `chan pending output` instead of writable fileevent

Yo, I've written a server with a simple protocol: the client sends a line, the server sends a line back in response, repeat. To prevent a client from filling Tcl's output buffer by sending lots of lines but not accepting data back, can I just check chan pending output instead of using a writable fileevent?
proc respond {stream msg} {
    if {[chan pending output $stream] <= 1024} {
        puts $stream $msg
    } else {
        #close $stream
    }
}
For output, chan pending output will correctly describe the number of bytes waiting in the output queue. Normally, that value will be bounded by the -buffersize value that you chan configure (or fconfigure) it to have.
That value will only be exceeded when the channel is non-blocking; with a blocking channel, when the value would go over the limit, there is instead a blocking write to the underlying device (socket, pipe, file, serial line, whatever), so by the time you could see that it went over, it's back under the limit again.
But if you're using non-blocking channels, you really should use chan event (or fileevent). Luckily for the actual writes, Tcl will actually do this for you automatically; the single most useful thing you could want from a writable event is already there. In practice, the most common actual use of a writable event is in detecting when an async socket connection becomes ready for service.
So what you are doing will work, but you'll have to think carefully about what to do if the output buffer is “getting full”; the idea that a message can need to be delayed is a place where a simple abstraction tends to become leaky. With 8.6's coroutines, you could (probably) do a transparent suspend or something like that, but getting that sort of thing right can take a little thought. (For example, a GUI client might need to show a busy indicator and put things into a state where the user can't enter more requests.)