I'm using PyCLIPS to integrate CLIPS into a program that should act as an ECA server (event-condition-action).
There are incoming events that, together with the system state, may or may not trigger rules that then emit actions on a message bus.
The system state manifests itself in the form of instances whose slots are modified depending on the incoming events.
The software is intended to be a long-lived service, but when an error occurs during the execution of a rule, for example through a misnamed handler:
ERROR: [MSGFUN1] No applicable primary message-handlers found for event-handler.
[PRCCODE4] Execution halted during the actions of defrule event-rule.
the CLIPS session becomes unresponsive to new messages. Slots are no longer updated using:
clips_instance.Send(event, event_args)
Nothing happens in CLIPS; even with clips.DebugConfig.WatchAll() there is no debug output.
Example:
>>> import clips
>>> clips.Build("(defclass POINT (is-a USER) (slot x) (slot y))")
>>> clips_instance = clips.BuildInstance("p1","POINT","(x 3) (y 5)")
>>> clips_instance.Send("get-x","")
<Integer 3>
>>> clips_instance.Send("get-z","")
<Symbol 'FALSE'>
>>> clips_instance.Send("get-y","")
<Symbol 'FALSE'>
>>>
Is there any possibility to avoid this or recover from this state?
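For now I detect the broken state with a wrapper like this (a sketch; it assumes PyCLIPS's clips.ErrorStream API and leaves actual recovery, e.g. rebuilding the environment, to the caller):

import clips

def send_checked(instance, message, arguments=""):
    """Send a message and raise if CLIPS reported an error.

    Assumes the PyCLIPS clips.ErrorStream API; recovery (e.g. rebuilding
    the classes and instances) is application-specific.
    """
    result = instance.Send(message, arguments)
    error_output = clips.ErrorStream.Read()  # drains the error stream
    if error_output:
        # CLIPS halted execution (e.g. [PRCCODE4]); surface the error
        # instead of letting the service silently stop reacting
        raise RuntimeError("CLIPS error: %s" % error_output)
    return result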
I'm trying to use pytransitions to implement retransmit logic from an initialization state. The summary: during the init state, if the other party hasn't responded after 1 second, resend the packet. This is very similar to what I see here: https://github.com/pytransitions/transitions/pull/461
I tried this patch, and even though I see the timeouts/failures happening, my callback is only called the first time. This is true with before/after and on_enter/exit. No matter what I've tried, I can't get the retransmit to occur again. Any ideas?
Even though this question is a bit dated I'd like to post an answer since Retry states have been added to transitions in release 0.9.
Retry itself will only count how often a state has been re-entered, meaning that the counter increases when transition source and destination are equal and resets otherwise. It is entirely passive and needs another means to trigger events. The Timeout state extension is commonly used in addition to Retry to achieve this. In the example below, a state machine is decorated with the Retry and Timeout state extensions, which allows a couple of keywords to be used in state definitions:
timeout - time in seconds before a timeout is triggered after a state has been entered
on_timeout - the callback(s) called when the timeout was triggered
retries - the number of retries before failure callbacks are called when a state is re-entered
on_failure - the callback(s) called when the re-entrance counter reaches retries
The example will re-enter pinging unless a randomly generated number between 0 and 1 is larger than 0.8. This can be interpreted as a server that answers roughly only every fifth request. When you execute the example, the number of attempts required to reach 'initialized' can vary, and the run can even fail when the retry limit is reached.
from transitions import Machine
from transitions.extensions.states import add_state_features, Retry, Timeout
import random
import time


# create a custom machine with state extension features and also
# add enter callbacks for the states 'pinging', 'initialized' and 'init_failed'
@add_state_features(Retry, Timeout)
class RetryMachine(Machine):

    def on_enter_pinging(self):
        print("pinging server...")
        if random.random() > 0.8:
            self.to_initialized()

    def on_enter_initialized(self):
        print("server answered")

    def on_enter_init_failed(self):
        print("server did not answer!")


states = ["init",
          {"name": "pinging",
           "timeout": 0.5,  # after 0.5s we assume the "server" won't answer
           "on_timeout": "to_pinging",  # when timeout occurs, enter 'pinging' again
           "retries": 3,  # three pinging attempts will be conducted
           "on_failure": "to_init_failed"},
          "initialized",
          "init_failed"]

# we don't pass a model to the machine which will result in the machine
# itself acting as a model; if we add another model, the 'on_enter_<state>'
# methods must be defined on the model and not the machine
m = RetryMachine(states=states, initial="init")

assert m.is_init()
m.to_pinging()

while m.is_pinging():
    time.sleep(0.2)
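To illustrate the comment about models: a minimal sketch (reusing the imports and the states list from above) where the machine stays plain and the on_enter_<state> callbacks live on a dedicated model object:

@add_state_features(Retry, Timeout)
class PlainRetryMachine(Machine):
    pass  # no callbacks here; they are resolved on the model


class PingModel:
    def on_enter_pinging(self):
        print("pinging server...")
        if random.random() > 0.8:
            self.to_initialized()  # auto transitions are added to the model

    def on_enter_initialized(self):
        print("server answered")

    def on_enter_init_failed(self):
        print("server did not answer!")


model = PingModel()
machine = PlainRetryMachine(model=model, states=states, initial="init")
model.to_pinging()

while model.is_pinging():
    time.sleep(0.2)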
I am trying to use the RxJava caching mechanism ( RxJava2 ) but i can't seem to catch how it works or how can i control the cached contents since there is the cache operator.
I want to verify the cached data with some conditions before emitting the new data.
for example
someObservable
    .repeat()
    .filter { it.age < maxAge }
    .map { it.name }
    .cache()
How can I check and filter the cached value, emit it if the check succeeds, and request a new value if it does not?
Since the value changes periodically, I need to verify whether the cache is still valid before requesting a new one.
There is also the ObservableCache<T> class, but I can't find any resources on using it.
Any help would be much appreciated. Thanks.
This is not how replay/cache works. Please read the #replay/#cache documentation first.
replay
This operator returns a ConnectableObservable, which has some methods (#refCount/#connect/#autoConnect) for connecting to the source.
When #replay is applied without an overload, the source subscription is multicast and all values emitted since connection will be replayed. The source subscription is lazy and connects to the source via #refCount/#connect/#autoConnect.
Returns a ConnectableObservable that shares a single subscription to the underlying ObservableSource that will replay all of its items and notifications to any future Observer.
Applying #replay without any connect method (#refCount/#connect/#autoConnect) will not emit any values on subscription.
A Connectable ObservableSource resembles an ordinary ObservableSource, except that it does not begin emitting items when it is subscribed to, but only when its connect method is called.
replay(1)#autoConnect(-1) / #refCount(1) / #connect
Applying replay(1) will cache the last value and emit the cached value on each subscription. #autoConnect(-1) will open a connection immediately and keep it open until a terminal event (onComplete, onError) happens. #refCount is similar, but will disconnect from the source when all subscribers disappear. The #connect operator can be used when you need to wait until all subscribers have subscribed to the observable, in order not to miss values.
usage
#replay(1) -- most of the time it should be used at the end of the observable chain.
sourceObs
    .filter()
    .map()
    .replay(bufferSize)
    .refCount(connectWhenXSubscriberSubscribed)
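A runnable sketch of the late-subscriber behavior (written in the style of the cache test further down; the names are mine):

@Test
fun replayLastValueToLateSubscribers() {
    val source = PublishSubject.create<Int>()
    // replay(1) caches only the most recent value; refCount() connects on
    // the first subscription and disconnects when all subscribers are gone
    val shared = source.replay(1).refCount()

    val early = shared.test()
    source.onNext(1)
    source.onNext(2)

    early.assertValues(1, 2)
    // a late subscriber only receives the cached last value
    shared.test().assertValues(2)
}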
caution
Applying #replay without a buffer limit or expiration date will lead to memory leaks when your observable is infinite.
cache / cacheWithInitialCapacity
These operators are similar to #replay with autoConnect(1). They will cache every value and replay the cached values on each subscription.
The operator subscribes only when the first downstream subscriber subscribes and maintains a single subscription towards this ObservableSource. In contrast, the operator family of replay() that return a ConnectableObservable require an explicit call to ConnectableObservable.connect().
Note: You sacrifice the ability to dispose the origin when you use the cache Observer so be careful not to use this Observer on ObservableSources that emit an infinite or very large number of items that will use up memory. A possible workaround is to apply takeUntil with a predicate or another source before (and perhaps after) the application of cache().
example
@Test
fun cacheReplaysAllValues() {
    val create = PublishSubject.create<Int>()
    val cacheWithInitialCapacity = create
        .cacheWithInitialCapacity(1)

    cacheWithInitialCapacity.subscribe()

    create.onNext(1)
    create.onNext(2)
    create.onNext(3)

    cacheWithInitialCapacity.test().assertValues(1, 2, 3)
    cacheWithInitialCapacity.test().assertValues(1, 2, 3)
}
usage
Use the cache operator when you cannot control the connect phase:
This is useful when you want an ObservableSource to cache responses and you can't control the subscribe/dispose behavior of all the Observers.
caution
As with replay(), the cache is unbounded and could lead to memory leaks.
Note: The capacity hint is not an upper bound on cache size. For that, consider replay(int) in combination with ConnectableObservable.autoConnect() or similar.
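A sketch of the bounded alternative that note suggests:

// replay at most the last 10 items to each new subscriber, connecting
// when the first subscriber arrives; the buffer is bounded, unlike cache()
val bounded: Observable<Int> = Observable.range(1, 100)
    .replay(10)
    .autoConnect()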
further reading
https://blog.danlew.net/2018/09/25/connectable-observables-so-hot-right-now/
https://blog.danlew.net/2016/06/13/multicasting-in-rxjava/
If your event source (Observable) is an expensive operation, such as reading from a database, you shouldn't use Subject to observe the events, since that will repeat the expensive operation for each subscriber. Caching can also be risky with infinite streams due to "OutOfMemory" exceptions. A more appropriate solution may be ConnectableObservable, which only performs the source operation once, and broadcasts the updated value to all subscribers.
Here is a code sample. I didn't bother creating an infinite periodic stream or including error handling to keep the example simple. Let me know if it does what you need.
import io.reactivex.Observable;
import io.reactivex.observables.ConnectableObservable;
import io.reactivex.schedulers.Schedulers;

import java.util.Random;

import static java.lang.Thread.sleep;

class RxJavaTest {

    private final int maxValue = 50;

    private final ConnectableObservable<Integer> source =
            Observable.<Integer>create(
                    subscriber -> {
                        log("Starting Event Source");
                        subscriber.onNext(readFromDatabase());
                        subscriber.onNext(readFromDatabase());
                        subscriber.onNext(readFromDatabase());
                        subscriber.onComplete();
                        log("Event Source Terminated");
                    })
                    .subscribeOn(Schedulers.io())
                    .filter(value -> value < maxValue)
                    .publish();

    void run() throws InterruptedException {
        log("Starting Application");
        log("Subscribing");
        source.subscribe(value -> log("Subscriber 1: " + value));
        source.subscribe(value -> log("Subscriber 2: " + value));
        log("Connecting");
        source.connect();
        // Add sleep to give the event source enough time to complete
        log("Application Terminated");
        sleep(4000);
    }

    private Integer readFromDatabase() throws InterruptedException {
        // Emulate long database read time
        log("Reading data from database...");
        sleep(1000);
        int randomValue = new Random().nextInt(2 * maxValue) + 1;
        log(String.format("Read value: %d", randomValue));
        return randomValue;
    }

    private static void log(Object message) {
        System.out.println(
                Thread.currentThread().getName() + " >> " + message
        );
    }
}
Here's the output:
main >> Starting Application
main >> Subscribing
main >> Connecting
main >> Application Terminated
RxCachedThreadScheduler-1 >> Starting Event Source
RxCachedThreadScheduler-1 >> Reading data from database...
RxCachedThreadScheduler-1 >> Read value: 88
RxCachedThreadScheduler-1 >> Reading data from database...
RxCachedThreadScheduler-1 >> Read value: 42
RxCachedThreadScheduler-1 >> Subscriber 1: 42
RxCachedThreadScheduler-1 >> Subscriber 2: 42
RxCachedThreadScheduler-1 >> Reading data from database...
RxCachedThreadScheduler-1 >> Read value: 37
RxCachedThreadScheduler-1 >> Subscriber 1: 37
RxCachedThreadScheduler-1 >> Subscriber 2: 37
RxCachedThreadScheduler-1 >> Event Source Terminated.
Note the following:
Events only start firing once connect() is called on the source, not when observers subscribe to the source.
Database calls are only made once per event update.
Filtered values are not emitted to subscribers.
All subscribers are executed on the same thread.
The application logs "Application Terminated" before the events are processed because the source runs concurrently on another thread. Normally your app will run in an event loop, so it will remain responsive during slow operations.
I am using a gen_server behaviour and trying to understand how handle_info/2 can be triggered by a timeout occurring in handle_call/3, for example:
-module(server).
-export([init/1, handle_call/3, handle_info/2, terminate/2]).
-export([start/0, stop/1]).

init(Data) ->
    {ok, 33}.

start() ->
    gen_server:start_link(?MODULE, ?MODULE, []).

stop(Pid) ->
    gen_server:stop(Pid).

handle_call(Request, From, State) ->
    Return = {reply,State,State,5000},
    Return.

handle_info(Request, State) ->
    {stop, normal, State}.

terminate(Reason, State) ->
    {ok, S} = file:open("D:/Erlang/Supervisor/err.txt", [read, write]),
    io:format(S, "~s~n", [Reason]),
    ok.
What I want to do:
I was expecting that if I launch the server and do not use gen_server:call/2 for 5 seconds (in my case), then handle_info/2 would be called, which would in turn issue the stop, thus calling terminate/2.
I see it does not happen this way; in fact, handle_info/2 is not called at all.
In examples such as this one, I see the timeout is set in the return of init/1. What I can deduce is that handle_info/2 gets triggered only if I initialize the server and then issue nothing (neither cast nor call) for N seconds. If so, why can I provide Timeout in the return of both handle_cast/2 and handle_call/3?
Update:
I was trying to get the following functionality:
If no call is issued in X seconds trigger handle_info/2
If no cast is issued in Y seconds trigger handle_info/2
I thought these timeouts could be set in the returns of handle_call and handle_cast:
{reply,Reply,State,X} % for call
{noreply,State,Y}     % for cast
If not, when are those timeouts triggered, given that they are return values?
To initiate timeout handling from the gen_server:handle_call/3 callback, this callback has to be called in the first place. Your Return={reply,State,State,5000} is never executed at all, because no call is issued.
Instead, if you want to "launch the server and not use gen_server:call/2 for 5 seconds, then handle_info/2 would be called", you might return an {ok,State,Timeout} tuple from the gen_server:init/1 callback.
init(Data) ->
    {ok, 33, 5000}.
You cannot set different timeouts for different calls and casts. As stated by Alexey Romanov in the comments,
Having different timeouts for different types of messages just isn’t something any gen_* behavior does and would have to be simulated by maintaining them inside state.
If one returns a tuple with a Timeout from handle_call/3 ({reply,Reply,State,Timeout}) or handle_cast/2 ({noreply,State,Timeout}), the timeout is triggered if the mailbox of this process is still empty after Timeout milliseconds.
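A minimal sketch of that mechanism: once a call has been served, the Timeout in the reply tuple starts ticking, and an idle mailbox ends in a timeout message delivered to handle_info/2.

handle_call(_Request, _From, State) ->
    %% the 5000 ms timeout starts counting only after this call is served
    {reply, State, State, 5000}.

handle_info(timeout, State) ->
    %% reached when no message arrived within 5000 ms of the last call
    io:format("no message for 5 seconds after the last call~n"),
    {noreply, State}.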
I suggest you read the source code of gen_server.erl:
% gen_server.erl
% line 400
loop(Parent, Name, State, Mod, Time, HibernateAfterTimeout, Debug) ->
    Msg = receive
              Input ->
                  Input
          after Time ->
                  timeout
          end,
    decode_msg(Msg, Parent, Name, State, Mod, Time, HibernateAfterTimeout, Debug, false).
It helps you understand the Timeout parameter: the server's main loop passes it to the after clause of its receive, so an empty mailbox for Time milliseconds produces a timeout message.
I'm trying to use delay and amb to execute a sequence of the same task separated by time.
All I want is for a download attempt to execute some time in the future, and only if the same task failed before. Here's how I have things set up; but unlike what I'd expect, all three downloads seem to execute without delay.
Observable.amb([
    Observable.catch(redditPageStream, Observable.empty()).delay(0 * 1000),
    Observable.catch(redditPageStream, Observable.empty()).delay(30 * 1000),
    Observable.catch(redditPageStream, Observable.empty()).delay(90 * 1000),
    # Observable.throw(new Error('Failed to retrieve reddit page content')).delay(10000)
    # Observable.create(
    #     (observer) ->
    #         throw new Error('Failed to retrieve reddit page content')
    # )
]).defaultIfEmpty(Observable.throw(new Error('Failed to retrieve reddit page content')))
The full code can be found here: src
I was hoping that the first successful observable would cancel out the ones still being delayed.
Thanks for any help.
delay doesn't actually stop the execution of whatever you are doing; it just delays when the events are propagated. If you want to delay execution you would need to do something like:
redditPageStream.delaySubscription(1000)
Since your source is producing immediately, the above will delay the actual subscription to the underlying stream, effectively delaying when it begins producing.
I would suggest, though, that you use one of the retry operators to handle your retry logic rather than rolling your own through the amb operator.
redditPageStream.delaySubscription(1000).retry(3);
This will give you a constant retry delay. If you want to implement a linear backoff approach instead, you can use the retryWhen() operator, which lets you apply whatever logic you want to the backoff.
redditPageStream.retryWhen(errors => {
    return errors
        //Only take 3 errors
        .take(3)
        //Use timer to implement a linear back off and flatten it
        .flatMap((e, i) => Rx.Observable.timer(i * 30 * 1000));
});
Essentially, retryWhen will create an Observable of errors; each event that makes it through is treated as a retry attempt. If you error or complete that stream, it will stop retrying.
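One subtlety worth noting (a sketch building on the snippet above): when the errors Observable completes, the retried stream completes silently rather than erroring, so rethrowing once the retry budget is spent surfaces the failure explicitly:

redditPageStream.retryWhen(errors => {
    return errors.flatMap((e, i) =>
        // retry the first three failures with a linear back off,
        // then rethrow to error the stream instead of completing silently
        i < 3 ? Rx.Observable.timer(i * 30 * 1000)
              : Rx.Observable.throw(e));
});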
// 1 fixed thread
implicit val waitingCtx = scala.concurrent.ExecutionContext.fromExecutor(Executors.newFixedThreadPool(1))

// "map" will use waitingCtx
val ss = (1 to 1000).map { n => // if I change it to 10 000, the program stops at some point, locking forever
  service1.doServiceStuff(s"service ${n}").map { s =>
    service1.doServiceStuff(s"service2 ${n}")
  }
}
Each doServiceStuff(name: String) takes 5 seconds. doServiceStuff does not take an implicit ec: ExecutionContext parameter; it uses its own execution context internally and runs Future { blocking { ... } } on it.
In the end the program prints:
took: 5.775849753 seconds for 1000 x 2 stuffs
If I change 1000 to 10000, adding even more tasks (val ss = (1 to 10000)), then the program stops:
~17,027 lines are printed (out of 20,000). No "ERROR" message is printed. No "took" message is printed. And nothing further is processed.
But if I change the execution context to ExecutionContext.fromExecutor(null: Executor) (the global one), then it ends in about 10 seconds (though not normally):
~17249 lines printed
ERROR: java.util.concurrent.TimeoutException: Futures timed out after [10 seconds]
took: 10.646309398 seconds
That's the question: why does it stop without any message when using the fixed-thread context, but terminate with an error and output when using the global context?
And sometimes it is not reproducible.
UPDATE: I do see "ERROR" and "took" if I increase the pool size from 1 to N. It does not matter how high N is; the ERROR still occurs.
The code is here: https://github.com/Sergey80/scala-samples/tree/master/src/main/scala/concurrency/apptmpl
and here, doManagerStuff2()
I think I have an idea of what's going on. If you squint enough, you'll see that the work map does is extremely lightweight: it just fires off a new Future (because doServiceStuff returns a Future). I bet the behavior will change if you switch to flatMap, which will actually flatten the nested Future and thus wait for the second doServiceStuff call to complete.
Since you're not flattening these futures, all your awaits downstream are awaiting the wrong thing, and you are not catching it because you're discarding whatever the service returns.
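A minimal sketch of the difference (this doServiceStuff is a stand-in for the real service method):

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

def doServiceStuff(name: String): Future[String] =
  Future { Thread.sleep(100); name }

// map yields Future[Future[String]]: the outer future completes as soon
// as the inner future has been created, not when it has finished
val nested: Future[Future[String]] =
  doServiceStuff("first").map(_ => doServiceStuff("second"))

// flatMap flattens to Future[String]: it completes only after both
// service calls have finished
val flat: Future[String] =
  doServiceStuff("first").flatMap(_ => doServiceStuff("second"))

Await.ready(nested, 1.second) // done after ~100 ms
Await.result(flat, 1.second)  // done after ~200 ms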
Update
OK, I misinterpreted your question, although I still think that the nested Future is a bug.
When I try your code with both executors and 10000 tasks, I get an OutOfMemory error when creating threads in the ForkJoin execution context (i.e., for the service tasks), which I'd expect. Did you use any specific memory settings?
With 1000 tasks they both complete successfully.