I am trying to write a rule that detects whether a given event has occurred 'n' times in the last 'm' units of time.
I am using Drools version 5.4.Final. I have also tried 5.5.Final, with no difference.
I have found that there are a couple of Conditional Elements, as Drools calls them: accumulate and collect. I have used collect in my sample rule below:
rule "check-login-attack-rule-1"
dialect "java"
when
$logMessage: LogMessage()
$logMessages : ArrayList ( size >= 3 )
from collect(LogMessage(getAction().equals(Action.Login)
&& isProcessed() == false)
over window:time(10s))
then
LogManager.debug(Poc.class, "!!!!! Login Attack detected. Generating alert.!!!"+$logMessages.size());
LogManager.debug(Poc.class, "Current Log Message: "+$logMessage.getEventName()+":"+(new Date($logMessage.getTime())));
int size = $logMessages.size();
for(int i = 0 ; i < size; i++) {
Object msgObj = $logMessages.get(i);
LogMessage msg = (LogMessage) msgObj;
LogManager.debug(Poc.class, "LogMessage: "+msg.getEventName()+":"+(new Date(msg.getTime())));
msg.setProcessed(true);
update(msgObj); // Does not work. Rule execution does not proceed beyond this point.
// retract(msgObj) // Does not work. Rule execution does not proceed beyond this point.
}
// Completed processing the logs over a given window. Now removing the processed logs.
//retract($logMessages) // Does not work. Rule execution does not proceed beyond this point.
end
The code that injects the logs is below. It inserts a log message every 3 seconds and fires the rules.
final StatefulKnowledgeSession kSession = kBase.newStatefulKnowledgeSession();
long msgId = 0;
while (true) {
    // Generate a log message every 3 secs.
    // Every alternate log message will satisfy a rule condition.
    LogMessage log = new LogMessage();
    log.setEventName("msg:" + msgId);
    log.setAction(LogMessage.Action.Login);
    LogManager.debug(Poc.class, "PUSHING LOG: " + log.getEventName() + ":" + log.getTime());
    kSession.insert(log);
    kSession.fireAllRules();
    LogManager.debug(Poc.class, "PUSHED LOG: " + log.getEventName() + ":" + (new Date(log.getTime())));
    // Sleep for 3 secs
    try {
        Thread.sleep(3 * 1000L);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    msgId++;
}
With this, what I could achieve is checking for the existence of such LogMessages in the last 10 seconds. I could also find out the exact set of LogMessages that occurred in the last 10 seconds and triggered the rule.
The problem is that once these messages are processed, they should not take part in the next cycle of evaluation. This is something I have not been able to achieve. I'll explain with an example.
Consider the timeline below. It shows the insertion of log messages and the alerts that should be generated.
Expected Result
Secs -- Log -- Alert
0 -- LogMessage1 -- No Alert
3 -- LogMessage2 -- No Alert
6 -- LogMessage3 -- Alert1 (LogMessage1, LogMessage2, LogMessage3)
9 -- LogMessage4 -- No Alert
12 -- LogMessage5 -- No Alert
15 -- LogMessage6 -- Alert2 (LogMessage4, LogMessage5, LogMessage6)
But what's happening with the current code is:
Actual Result
Secs -- Log -- Alert
0 -- LogMessage1 -- No Alert
3 -- LogMessage2 -- No Alert
6 -- LogMessage3 -- Alert1 (LogMessage1, LogMessage2, LogMessage3)
9 -- LogMessage4 -- Alert2 (LogMessage2, LogMessage3, LogMessage4)
12 -- LogMessage5 -- Alert3 (LogMessage3, LogMessage4, LogMessage5)
15 -- LogMessage6 -- Alert4 (LogMessage4, LogMessage5, LogMessage6)
Essentially, I am not able to discard messages that have already been processed and have taken part in an alert. I tried to use retract to remove the processed facts from working memory, but when I added retract in the then part of the rule, the rules stopped firing altogether. I have not been able to figure out why the rules stop firing after adding the retract.
Kindly let me know where I am going wrong.
You seem to be forgetting to mark the other 3 facts in the list as processed. You would need a helper class as a global to do so, because it has to be done in a for loop (see the sketch after the list below). Otherwise, these groups of messages can trigger the rule as well:
1 -> no trigger
1, 2 -> no trigger
1, 2, 3 -> triggers
2, 3, 4 -> triggers, because a new fact was added and 2 and 3 were still in the list
3, 4, 5 -> triggers, because a new fact was added and 3 and 4 were still in the list
and so on
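A minimal sketch of the kind of helper I mean (the class name, method name and wiring are only illustrative, and it assumes the Drools 5.x knowledge-api types): it is registered as a global so that the rule consequence can hand the whole collected list to plain Java code, which marks each message as processed and updates (or retracts) it in the session:

import java.util.List;
import org.drools.runtime.StatefulKnowledgeSession;
import org.drools.runtime.rule.FactHandle;

// Hypothetical helper, registered as a global on the session.
public class ProcessedMarker {

    private final StatefulKnowledgeSession session;

    public ProcessedMarker(StatefulKnowledgeSession session) {
        this.session = session;
    }

    // Mark every collected message as processed so that the
    // isProcessed() == false constraint filters it out on the next evaluation.
    public void markAllProcessed(List<LogMessage> messages) {
        for (LogMessage msg : messages) {
            msg.setProcessed(true);
            FactHandle handle = session.getFactHandle(msg);
            if (handle != null) {
                session.update(handle, msg);
                // or session.retract(handle); if the fact should leave working memory entirely
            }
        }
    }
}

It would be wired up with something like kSession.setGlobal("marker", new ProcessedMarker(kSession)); after declaring global ProcessedMarker marker in the DRL, and the rule's then part would then just call marker.markAllProcessed($logMessages);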
hope this helps
I get the following error, and I don't understand it:
o.a.k.s.k.i.KStreamSlidingWindowAggregate - Skipping record for expired window. topic=[...] partition=[0] offset=[16880] timestamp=[1662556875000] window=[1662542475000,1662556875000] expiration=[1662556942000] streamTime=[1662556942000]
streamTime=[1662556942000]
timestamp=[1662556875000]
streamTime - timestamp = 67s
The window size is 4 hours.
The grace period is 0.
Why was the record skipped, so that I didn't get an output message? It belongs to the window. Yes, the record is out of order.
Update:
After reading more about Kafka Streams, I understand that for each message it creates two windows:
1. (message time - window), and this window includes the message.
2. (message time + window), and this window excludes the message.
Window 1 is expired. Window 2 is not. That's why I didn't see an output message.
But logically it's wrong: the message belongs to a window, yet I don't get an output message.
Example
sliding window time diff = 10, grace = 0
stream time = 0
send message (time = 10, key = 2) -> output message for key = 2; stream time = 10
send message (time = 4, key = 1) -> no output message
send message (time = 5, key = 1) -> no output message
The last message belongs to the window (stream-time - window-time).
------ restart stream -------
stream time = 0
send message (time = 10, key = 2) -> output message for key = 2; stream time = 10
send message (time = 4, key = 2) -> 2 messages
In Kafka Streams, sliding windows are event based: a new window is created each time a record enters or drops out of a window. A window is defined by a record timestamp and a fixed duration.
Each record creates a window [record.timestamp - duration, record.timestamp] and
Each dropped record creates a window [record.timestamp + 1ms, record.timestamp + 1ms + duration].
(Be aware that other stream processing frameworks use a totally different definition of 'sliding windows')
The record is not included in the window when
stream-time > window-end + grace-period
(https://kafka.apache.org/27/javadoc/org/apache/kafka/streams/kstream/SlidingWindows.html)
For your initial example, the grace period is zero and your window ends (at the record timestamp) before the stream time; thus the record is not included in the window.
For the second example, I am not sure. My guess is that the records with key = 1 are expired because the stream time (10) has exceeded the record times (4, 5) (and the grace period is 0). For the records with key = 2, one window is created for the record with timestamp = 10 and an update of the same window is emitted, because that record satisfies the condition above. However, no additional windows are created for the out-of-order record, because the grace period is zero.
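As an illustration (this is not from your code; the topic names, serdes and the 5 minute grace value are assumptions), this is roughly how a sliding-window aggregation with a non-zero grace period would look, so that slightly out-of-order records are still accepted instead of being skipped:

import java.time.Duration;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.SlidingWindows;

public class SlidingWindowExample {

    // Counts events per key in 4-hour sliding windows, accepting records
    // that arrive up to 5 minutes out of order (the grace period).
    static Topology buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();

        builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               .windowedBy(SlidingWindows.withTimeDifferenceAndGrace(
                       Duration.ofHours(4),      // window size (time difference)
                       Duration.ofMinutes(5)))   // grace period for out-of-order records
               .count()
               .toStream()
               // flatten the windowed key so the result fits a plain String-keyed topic
               .map((windowedKey, count) -> KeyValue.pair(
                       windowedKey.key() + "@" + windowedKey.window().start(), count))
               .to("event-counts", Produced.with(Serdes.String(), Serdes.Long()));

        return builder.build();
    }
}

With a zero grace period, as in your setup, any window whose end falls behind the current stream time is closed immediately, which is exactly what the "Skipping record for expired window" log line reports.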
I am using a Queue and a Hold block together, where the Hold remains blocked until all the agents have arrived at the Queue block.
How do I change this to allow only a fixed number of agents (say 5) through at fixed intervals of time (say every 3 minutes)? The current properties of my Queue and Hold blocks are:
(screenshot: Queue block properties)
(screenshot: Hold block properties)
Create a cyclic event with a recurrence time of 3 minutes.
Also create an int variable, which you can name count.
In the event action field write:
count = 0;
hold.unblock();
Then, in the On enter field of the hold block write the following:
count++;
if( count == 5 ) {
self.block();
}
The only question I have is whether you want exactly 5 agents to leave every 3 minutes, or whether it's okay if some arrive a bit later. In other words, if after 3 minutes there are only 3 agents in the queue, do they leave and the hold remain unblocked in case another 2 arrive before the next cycle? Or does the hold block block again immediately?
In the solution I provided, if there are fewer than 5 at the cycle time occurrence, and new agents then arrive before the next cycle, they can pass.
Otherwise, create a new variable called target for example and write the following in the event action:
count = 0;
if ( queue.size() >= 5 ) {
    target = 5;
    hold.unblock();
}
else if ( queue.size() > 0 ) {
    target = queue.size();
    hold.unblock();
}
And in the on enter of the hold, write:
count++;
if ( count == target ) {
    self.block();
    target = 0;
}
I would advise against using a hold block for such delicate control over releasing agents. Instead, I propose a simpler solution.
Simply let the agents build up in the queue and then remove them using an event. The only action of this event is to remove the minimum of your set number of agents and the queue size from the queue, and send them to an Enter block. The Enter block is where your process flow then continues. See the screenshot below.
The code in the event is:
for (int i = 0; i < min(5, queue.size()); i++) {
    enter.take(queue.remove(0));
}
On that note, you can also use the Wait block (which is hidden in the Auxiliary section of the PML library).
Then you can ditch the Enter block and simply call the following code:
for (int i = 0; i < min(5, wait.size()); i++) {
    wait.free(wait.get(0));
}
I am using FreeRTOS to dispatch a task set of 4 periodic tasks.
All tasks have the same period of 10 time units, but they differ in their release times. The release times are 10, 3, 5, 0 time units for tasks T1, T2, T3, T4 respectively. All 4 tasks are stored inside the linked list gll_t* pTaskList. The tasks should run like this:
t=0 T4 is released, t=3 T2 is released, t=5 T3 is released, t=10 T1 is released and T4 is executed again since it was released at t=0, and so on...
However, I have two problems with my dispatcher code:
Problem 1: At t=0, only T4 is ready, but note that T1 has a release time of 10. According to my if statement, for T1 I have 0 % (10 + 10) == 0, so T1 gets released even though it is not ready. I could introduce a boolean that tells whether a task has been released, but is there a smarter way to do it without introducing extra variables?
Problem 2: At t=26, no tasks are ready; however, task T2 gets released. According to my if statement, for T2 I have 26 % (3 + 10) == 0.
void prvTaskSchedulerProcess(void *pvParameters) {
    ...
    uint32_t uCurrentTickCount = 0;
    gll_t* pTaskList = (gll_t*) pvParameters;
    WorkerTask_t* pWorkerTask = NULL;

    while (true) {
        for (uint8_t uIndex = 0; uIndex < pTaskList->size; uIndex++) {
            pWorkerTask = gll_get(pTaskList, uIndex);

            // Check if the task is ready to be executed
            if ((uCurrentTickCount % (pWorkerTask->uReleaseTime + pWorkerTask->uPeriod)) == 0) {
                // Dispatch the ready task
                vTaskResume(pWorkerTask->xHandle);
            }
        }
        uCurrentTickCount++;

        // Sleep the scheduler task, so the other tasks can run
        vTaskDelay(TASK_SCHEDULER_TICK_TIME * SCHEDULER_OUTPUT_FREQUENCY_MS);
    }
}
Using extra flags seems like a simple solution. However, I was told that introducing flag variables is not the best solution, because it makes the code less readable and maintainable, so I want to avoid them. How can correct task dispatching be achieved without the extra flags (possibly by correcting my if statement condition)?
Note that using vTaskResume() in this way is inherently dangerous if there is even the tiniest chance that the task you are resuming has not finished its previous actions and suspended itself again. See the API docs for a fuller explanation: https://www.freertos.org/taskresumefromisr.html
To fix problem 2, use the following condition instead of what you have:
if ((uCurrentTickCount % pWorkerTask->uPeriod) == pWorkerTask->uReleaseTime)
To fix problem 1 (and problem 2), use the following condition:
if ((uCurrentTickCount >= pWorkerTask->uReleaseTime) &&
    ((uCurrentTickCount % pWorkerTask->uPeriod) == (pWorkerTask->uReleaseTime % pWorkerTask->uPeriod)))
I'm trying to get a better understanding of Go channels. While reading this article, I'm toying with non-blocking sends and have come up with the following code:
package main

import (
	"fmt"
	"time"
)

func main() {
	stuff := make(chan int)

	go func() {
		for i := 0; i < 5; i++ {
			select {
			case stuff <- i:
				fmt.Printf("Sent %v\n", i)
			default:
				fmt.Printf("Default on %v\n", i)
			}
		}
		println("Closing")
		close(stuff)
	}()

	time.Sleep(time.Second)
	fmt.Println(<-stuff)
	fmt.Println(<-stuff)
	fmt.Println(<-stuff)
	fmt.Println(<-stuff)
	fmt.Println(<-stuff)
}
This will print:
Default on 0
Default on 1
Default on 2
Default on 3
Default on 4
Closing
0
0
0
0
0
While I do understand that only 0s will get printed, I do not really understand why even the first send triggers the default branch of the select.
What is the logic behind the behavior of a select in this case?
Example at the Go Playground
You never send any values on stuff: you execute all the default cases before you get to any of the receive operations in the fmt.Println statements. The default case is taken immediately if there is no other case that can proceed, which means your loop executes and returns as quickly as possible.
You want the loop to block, so you don't need the default case. You don't need the close at the end either, because you're not relying on the closed channel to unblock a receive or break out of a range clause.
stuff := make(chan int)

go func() {
	for i := 0; i < 5; i++ {
		select {
		case stuff <- i:
			fmt.Printf("Sent %v\n", i)
		}
	}
	println("Closing")
}()

time.Sleep(time.Second)
fmt.Println(<-stuff)
fmt.Println(<-stuff)
fmt.Println(<-stuff)
fmt.Println(<-stuff)
fmt.Println(<-stuff)
https://play.golang.org/p/k2rmRDP38f
Notice also that the last "Sent" line and the "Closing" line aren't printed, because there is no other synchronization waiting for the goroutine to finish; however, that doesn't affect the outcome of this example.
Since you're using a non-blocking send, stuff <- i will only be executed if there is a reader already waiting on the channel, or if the channel has spare buffer capacity. If not, the send would have to block, so the default case is taken instead.
Now, since you have a time.Sleep(time.Second) before the print statements that read from the channel, there are no readers for the channel until 1 second has passed. The goroutine, on the other hand, finishes executing within that time and doesn't send anything.
You're seeing all zeroes in the output because the fmt.Println(...) statements are reading from a closed channel, and a receive from a closed int channel yields the zero value.
Your first case isn't executing.
Here's what your program does:
Start a goroutine.
Attempt to send 0 through 4 on the channel; each send would block because nothing is reading the channel, so each iteration falls through to the default case.
Meanwhile, in the main goroutine, you're sleeping for one second...
Then after the second has elapsed, attempt to read from the channel, but it is closed, so you get 0 every time.
To get your desired behavior, you have two choices:
Use a buffered channel, which can hold all of the data you send:
stuff := make(chan int, 5)
Don't use default in your select statement, which will cause each send to wait until it can succeed.
Which is preferred depends on your goals. For a minimal example like this, neither option is clearly better or worse.
It only executes the default case, because the for loop runs 5 times before anything starts reading from the channel. Each time through, because nothing can read from the channel, it goes to the default case. If something could read from the channel, it would execute that case.
In order to guarantee bounded waiting with the test-and-set instruction, the following is the code given in the operating systems book by Galvin:
do {
1     waiting[i] = true;
2     while (waiting[i] && test_and_set(&lock)) ;
3     waiting[i] = false;

      /* critical section */

4     j = (i + 1) % n;
5     while ((j != i) && !waiting[j])
6         j = (j + 1) % n;
7     if (j == i)
8         lock = false;
9     else
10        waiting[j] = false;

      /* remainder section */
} while (true);
I understand the complete code and have concluded the following:
A process P_i will enter the critical section if either waiting[i] == false or test_and_set(&lock) == false, the latter ensuring that lock was false previously. So the exit section either sets waiting[j] or lock to false.
But I have some doubts:
1. If in the exit section it is found that the same process again requests the critical section, i.e.
if j == i
then according to the code, that process has to start its execution from line number 2, i.e. it will execute
test_and_set(&lock)
in the while loop, find the return value of test_and_set(&lock) to be false, and then move to the critical section. My doubt is: if the same process wants to enter the critical section again, is it necessary to start its execution right from line number 2?
2. Now I want to do the following permutation and check the possible outcome: I want to swap line numbers 8 and 10.
If at line number 8 I put
waiting[j] = false;
then the process will still move to the critical section even though lock == true now.
If at line number 10 I put
lock = false;
then the process P_j will also move to the critical section even though waiting[i] == true, and I think that would be better because line number 3 will assign waiting[i] = false after the while loop breaks due to test_and_set(&lock) == false.
On the other hand, if I make this change, the process has to execute test_and_set(&lock), which is time consuming.
Is my assumption for point 2 right?
What is the correct reason for point 1?
Thanks
Regarding point 1:
My doubt is: if the same process wants to enter the critical section again, is it necessary to start its execution right from line number 2?
A process is basically a program in execution. A process cannot just jump around choosing which instruction to execute next; the control flow decides that. The control flow of the code (which is the process itself) implies that if a process successfully enters the critical section and then wants to enter it again, it will first execute lines 4 to 10, then execute the remainder section, and would have to start execution right from line 1.
Regarding your point 2:
Now I want to do the following permutation and check the possible outcome: I want to swap line numbers 8 and 10.
If you swap the lines, the bounded waiting condition would no longer exist.
Proof
Suppose that only one process P(i) made a request to access the critical section and it successfully entered. So
lock == true
because the test_and_set on line 2 that let it out of the while loop set lock to true. Now it starts executing from line 4.
Then j takes the following values:
i + 1, i + 2, ..., n - 1, 0, 1, 2, ..., i
The wrap-around of values occurs because of the % operator. And because no other process made a request to enter the critical section, waiting[j] == false for every j != i.
Therefore the condition while ((j != i) && !waiting[j]) becomes false when j equals i, and now we are at line 7. The new code is:
if (j == i)
    waiting[j] = false;
else
    lock = false;
Now if any process makes a request to enter the critical section, the condition in while (waiting[i] && test_and_set(&lock)); will always evaluate to true, because lock is true and waiting[i] (set on line 1 by the requesting process) is also true, so it will be stuck spinning. There would be no progress.