For the implementation of a unit test I need to set up a specific state of an object. Because the state is implemented with a state machine, MDriven rejects direct assignment of the state value to the attribute.
I could trigger my way through the complete state machine until I reach the needed state, but I assume there is an easier way to set the state to a specific value, one that is rather hidden since it normally isn't supposed to work that way.
Does anybody know how this could be done?
Yes - read details here https://wiki.mdriven.net/index.php/StateMachineForceMode
But basically you set the state machine for an attribute into ForceMode - after this you can freely change the state value:
self.stateMachineForceMode('State');
self.State:='State3';
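For example, the force-mode calls can be wrapped in a C# helper on the class so that a test can force a state by name: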
public void StateMachineForceState(string NewState)
{
    // Use with caution: this bypasses the state machine's transition rules.
    string ForceMode = "self.stateMachineForceMode('TheStateAttribute')";
    string close = "self.TheStateAttribute := 'close'";
    string open = "self.TheStateAttribute := 'open'";
    Eco.Handles.DefaultEcoSpace es = this.AsIObject().ServiceProvider()
        .GetEcoService<IEcoSpaceService>() as Eco.Handles.DefaultEcoSpace;
    switch (NewState)
    {
        case "close":
            es.ActionLanguage.Execute(this, ForceMode);
            es.ActionLanguage.Execute(this, close);
            break;
        case "open":
            es.ActionLanguage.Execute(this, ForceMode);
            es.ActionLanguage.Execute(this, open);
            break;
        default:
            break;
    }
}
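In a unit test this lets you jump straight to the state under test. A minimal sketch, assuming a hypothetical Door class that owns TheStateAttribute:
var door = new Door();                   // hypothetical class exposing the state machine attribute
door.StateMachineForceState("open");     // force the 'open' state without walking the transitions
// ... exercise and assert the behavior that requires the 'open' state ...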
I'm working on porting some of my View Models into (rough) Finite State Machines as my UI tends to fit that pattern rather well (Mealy/Moore, don't care for the purpose of this question). Additionally, when done well - state machines really clean up testing - as they prohibit certain test permutations from ever happening.
My current view models use RxSwift (and RxKotlin - depending on the app), and the underlying use cases (database calls, network calls, etc) also use Rx (hence why I need to stay in that ecosystem).
What I've discovered is that Rx is awesome and state machines are awesome, but Rx + state machines together seem to make a bit of a hash of anything non-trivial. For example, I know I can use the .scan operator to retain some state, if my state machine were entirely synchronous (for example, something roughly like this in Swift):
enum Event {
case event1
case event2
case event3
}
enum State {
case state1
case state2
case state3
func on(event: Event) -> State {
switch (self, event) {
case (.state1, .event1):
// Do something
return .state2
case (.state2, .event2):
// Do something
return .state3
default:
return self // (or nil, or something)
}
}
}
func foo() -> Observable<State> {
let events = Observable<Event>.of(.event1, .event2, .event3)
return events.scan(State.state1) { (currentState, event) -> State in
return currentState.on(event)
}
}
But, what can I do if the return from my State.on function is an Observable (like a network call or something that takes a long time, which is already in Rx)?
enum State {
case notLoggedIn
case loggingIn
case loggedIn
case error
func on(event: Event) -> Observable<State> {
switch (self, event) {
case (.notLoggedIn, .event1):
return api.login(credentials)
.map({ (isLoggedIn) -> State in
if isLoggedIn {
return .loggedIn
}
return .error
})
.startWith(.loggingIn)
... other code ...
default:
return Observable.just(self)
}
}
}
I've tried making the .scan operator take in an Observable accumulator, but the result of this code is that the state machine is subscribed to or run too many times, presumably because the transition runs for each state in the accumulating observable.
return events.scan(Observable.just(State.state1)) { (currentState, event) -> Observable<State> in
currentState.flatMap({ (innerState) -> Observable<State> in
return innerState.on(event: event)
})
}.flatMap { (states) -> Observable<State> in
return states
}
I think, if I could manage to cleanly pull the state variable back in, the simplest implementation could look like this:
return events.flatMapLatest({ (event) -> Observable<State> in
return self.state.on(event: event)
.do(onNext: { (state) in
self.state = state
})
})
But, pulling from a private state variable into an observable stream, and updating it - well, not only is it ugly, I feel like I'm just waiting to be hit by a concurrency bug.
Edit: based on feedback from Sereja Bogolubov, I've added a Relay and come up with this code. Still not great, but getting there.
let relay = BehaviorRelay<State>(value: .initial)
...
func transition(from state: State, on event: Event) -> Observable<State> {
switch (state, event) {
case (.notLoggedIn, .event1):
return api.login(credentials)
.map({ (isLoggedIn) -> State in
if isLoggedIn {
return .loggedIn
}
return .error
})
.startWith(.loggingIn)
... other code ...
default:
return Observable.just(state)
}
}
return events.withLatestFrom(relay.asObservable(), resultSelector: { (event, state) -> Observable<State> in
return self.transition(from: state, on: event)
.do(onNext: { (state) in
self.relay.accept(state)
})
}).flatMap({ (states) -> Observable<State> in
return states
})
The relay (or replay subject or whatever) is updated in a do(onNext:) from the result of the state transition... This still feels like it could cause a concurrency problem, but I'm not sure what else would work.
No, you don't have to be entirely sync to maintain arbitrarily complex state.
Yes, there are ways to achieve the needed behavior without scan. How about withLatestFrom, where the other source is your current state (i.e. a separate Observable<MyState>; you would need a ReplaySubject<MyState> under the hood)?
Let me know if you need more details.
Proof of concept, javascript:
import { range, ReplaySubject } from 'rxjs';
import { withLatestFrom, map } from 'rxjs/operators';

const source = range(0, 10);
const state = new ReplaySubject(1);
state.next(0); // seed an initial state, otherwise withLatestFrom never emits

const example = source.pipe(
  withLatestFrom(state), // that's the way you read the actual state
  map(([n, currentState]) => {
    state.next(n); // that's the way you change the state
    return ...
  })
);
Please be aware that more sophisticated cases (for example, ones at risk of race conditions) might require something at least as complex as combineLatest and appropriate Schedulers in place.
I think Elm's architecture can come in handy here. In Elm, the reducer that you pass into the system doesn't just return state, it also returns a "command", which in our case would be an Observable<Event> (not an RxSwift.Event, but your Event enum). This command isn't stored in the scan's state; rather, it is subscribed to outside the scan and its output is fed back into the scan (through a Subject of some sort). Tasks that require cancelling would observe the current state and start and stop operation based on the state.
There are several libraries in the RxSwift ecosystem that help simplify this sort of thing. The two primary ones are ReactorKit and RxFeedback, and there are several others.
For a simple example of what I'm talking about, check out this gist. This sort of system allows your Moore machine to fire off an action upon entering a state which could potentially cause 0..n new input events.
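To make that concrete, here is a rough, self-contained RxSwift sketch of such a feedback loop. This is my own illustration, not the linked gist: the Event/State cases and the login() stub are invented, and error handling and cancellation are left out.
import RxSwift

enum Event { case logInTapped, loginSucceeded, loginFailed }
enum State { case notLoggedIn, loggingIn, loggedIn, error }

// Stand-in for an async call that is already in Rx.
func login() -> Observable<Bool> {
    return Observable.just(true).delay(.seconds(1), scheduler: MainScheduler.instance)
}

// Elm-style reducer: returns the new state plus an optional "command" observable
// whose events are fed back into the machine instead of being stored in scan's state.
func reduce(_ state: State, _ event: Event) -> (State, Observable<Event>?) {
    switch (state, event) {
    case (.notLoggedIn, .logInTapped):
        let command = login().map { $0 ? Event.loginSucceeded : Event.loginFailed }
        return (.loggingIn, command)
    case (.loggingIn, .loginSucceeded): return (.loggedIn, nil)
    case (.loggingIn, .loginFailed):    return (.error, nil)
    default:                            return (state, nil)
    }
}

let disposeBag = DisposeBag()
let userEvents = PublishSubject<Event>()   // UI events enter the loop here
let feedback = PublishSubject<Event>()     // command results re-enter the loop here

let states: Observable<State> = Observable.merge(userEvents, feedback)
    .scan(State.notLoggedIn) { current, event in
        let (next, command) = reduce(current, event)
        // The command is subscribed outside the state itself and fed back via the subject.
        command?.subscribe(onNext: { feedback.onNext($0) }).disposed(by: disposeBag)
        return next
    }
    .startWith(.notLoggedIn)

states.subscribe(onNext: { print($0) }).disposed(by: disposeBag)
userEvents.onNext(.logInTapped)            // prints notLoggedIn, loggingIn, then loggedIn
ReactorKit and RxFeedback formalize essentially this shape (a reducer plus effect feedback) so you don't have to hand-wire the subjects yourself.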
I am working in a Service Fabric application that uses IReliableQueue. For the uses cases of this system, the IReliableConcurrentQueue makes sense to use and some local testing (i.e. basically by just changing the code to use IReliableConcurrentQueue instead of IReliableQueue - queue name does not change) shows great performance improvements. However, I am worried about the impact of changing this in a production system (i.e. upgrading). I can't find any docs or online questions (unless I just missed them) about these considerations. For example, in this system, the existing IReliableQueue will almost always have items. So what happens to that data when I upgrade the SF application? Will it be available to dequeue in the IReliableConcurrentQueue? Or would data be lost? I know I can "just try it" but wanted to see if someone out there had done the same or could offer pointers to existing resources. Thanks!
Sorry for a late answer (which you probably don't need anymore, but still).
When we call the GetOrAddAsync method on IReliableStateManager, we aren't just retrieving an interface to stored values; we are actually creating an instance of a reliable collection. This basically means that the interface type we specify is very important.
Taking this into account, if we do this:
Service v. 1.0
// Somewhere in RunAsync for example
await this.StateManager.GetOrAddAsync<IReliableQueue<long>>("MyCollection")
Then doing this in the next version:
Service v. 1.1
// Somewhere in RunAsync for example
await this.StateManager.GetOrAddAsync<IReliableConcurrentQueue<long>>("MyCollection")
will throw an exception:
Returned reliable object of type Microsoft.ServiceFabric.Data.Collections.DistributedQueue`1[System.Int64] cannot be casted to requested type Microsoft.ServiceFabric.Data.Collections.IReliableConcurrentQueue`1[System.Int64]
and then:
System.ExecutionEngineException: 'Exception of type 'System.ExecutionEngineException' was thrown.'
The above exception looks like a bug, so I have filed one.
UPDATE 2019.06.28
It turned out that the System.ExecutionEngineException isn't a bug but rather undocumented behavior of the Environment.FailFast method in combination with the Visual Studio debugger.
Please see my comment to the above issue.
This is what would happen.
There are plenty of ways to overcome this. Here is the most obvious one:
Example
var migrate = false; // Indicates whether the migration still has to be performed.
var migrateValues = new List<long>();

var applicationFlags = await this.StateManager
    .GetOrAddAsync<IReliableDictionary<string, bool>>("application-flags");

using (var transaction = this.StateManager.CreateTransaction())
{
    var flag = await applicationFlags
        .TryGetValueAsync(transaction, "queue-to-concurrent-queue-migration");

    if (!flag.HasValue || !flag.Value)
    {
        var queue = await this.StateManager
            .GetOrAddAsync<IReliableQueue<long>>("value-collection");

        for (;;)
        {
            var c = await queue.TryDequeueAsync(transaction);
            if (!c.HasValue)
            {
                break;
            }

            migrateValues.Add(c.Value);
        }

        migrate = true;
    }

    // No commit here: the values are only read out, and the whole
    // collection is removed below anyway.
}

if (migrate)
{
    await this.StateManager.RemoveAsync("value-collection");

    using (var transaction = this.StateManager.CreateTransaction())
    {
        var concurrentQueue = await this.StateManager
            .GetOrAddAsync<IReliableConcurrentQueue<long>>("value-collection");

        foreach (var i in migrateValues)
        {
            await concurrentQueue.EnqueueAsync(transaction, i);
        }

        await applicationFlags.AddOrUpdateAsync(
            transaction,
            "queue-to-concurrent-queue-migration",
            true,
            (s, b) => true);

        // Commit inside the using block, while the transaction is still in scope.
        await transaction.CommitAsync();
    }
}
Please note that this code is just an illustrative example and should be properly tested before applying it to a real-life application.
I have a program design question in FreeRTOS:
I have a state machine with 4 states, and 6 tasks. In each state, different tasks must be executed, excepting Task1, which is always active:
State 1: Task1, Task2, Task3
State 2: Task1, Task2, Task3, Task4
State 3: Task1, Task5
State 4: Task1, Task6
Task1, Task3, Task4, Task5 and Task6 are periodic, and each one reads a different sensor.
Task2 is aperiodic, it sends a GPRS alarm only if a threshold is reached.
The switching between the states is determined by events from the sensor input of each task.
The initial approach for the design of main() is to have a switch to control the states, and depending on the state, suspend and activate the corresponding tasks:
void main()
{
    /* Initialisation of hw and variables */
    system_init();

    /* Creates FreeRTOS tasks and suspends all tasks except Task1 */
    task_create();

    /* Start the scheduler so FreeRTOS runs the tasks */
    vTaskStartScheduler();

    while (true)
    {
        switch (STATE)
        {
            case 1:
                suspend(Task4, Task5, Task6);
                activate(Task2, Task3);
                break;
            case 2:
                suspend(Task5, Task6);
                activate(Task2, Task3, Task4);
                break;
            case 3:
                suspend(Task2, Task3, Task4, Task6);
                activate(Task5);
                break;
            case 4:
                suspend(Task2, Task3, Task4, Task5);
                activate(Task6);
                break;
        }
    }
}
My question is: where should I call vTaskStartScheduler() in relation to the switch? It seems to me that in this code, once vTaskStartScheduler() is called, the program will never enter the switch statement.
Should I create another task always active to control the state machine, which has the previous while and switch statements inside, such as the following pseudocode?
task_control()
{
while(true)
{
switch STATE:
case 1:
suspend(Task4, Task5, Task6);
execute(Task2, Task3);
and so on...
}
}
Any advice will be much appreciated...
To answer your question, vTaskStartScheduler() will, as the name suggests, start the scheduler. Any code after it will only execute when the scheduler is stopped, which in most cases is when the program ends, so never. This is why your switch won't run.
As you have already alluded to, for your design you could use a 'main' task to control the others. You need to have created this and registered it with the scheduler before calling vTaskStartScheduler().
On a side note, if you do go with this approach, you only want to suspend/resume your tasks on first entry into a state, not on every iteration of the 'main' task.
For example:
static bool s_first_state_entry = true;
task_control()
{
while (true)
{
switch (STATE)
{
case 1:
if (s_first_state_entry)
{
// Only do this stuff once
s_first_state_entry = false;
suspend(Task4, Task5, Task6);
execute(Task2, Task3);
}
// Do this stuff on every iteration
// ...
break;
default:
break;
}
}
}
void set_state(int state)
{
STATE = state;
s_first_state_entry = true;
}
As Ed King mentioned, your solution contains a major design flaw: after starting the scheduler, no code that comes after it in the main function will ever execute until the scheduler is stopped.
I suggest implementing your state logic in the Idle task (remember to include delays in your tasks so you don't starve the Idle hook of processing time). The Idle task could block and unblock the tasks depending on the current state by means of semaphores. Remember, though, that the Idle hook is a task with the lowest possible priority, so be careful when designing your system. The solution I'm suggesting may be completely wrong if the tasks consume most of the processing time and never let the Idle task switch states.
Alternatively, you can create a superior control task, as mentioned by Ed King, with the highest priority; a rough sketch of that approach follows below.
To be honest, everything depends on what the tasks are really doing.
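To illustrate the dedicated control task idea, here is a rough sketch of one way to wire it up. This is my own example, not from the answers above: the queue, the task handles, and the state numbers are hypothetical, and the control task is assumed to be created with the highest priority before vTaskStartScheduler() is called.
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

/* Created before the scheduler starts (e.g. in system_init()); sensor tasks post the
   requested state with xQueueSend(xStateEventQueue, &state, 0) when their thresholds fire. */
QueueHandle_t xStateEventQueue;

/* Handles filled in by task_create(). */
extern TaskHandle_t xTask2, xTask3, xTask4, xTask5, xTask6;

void vTaskControl(void *pvParameters)
{
    int currentState = 1;
    int newState;

    for (;;)
    {
        /* Block until a sensor task reports an event; no CPU time is spent polling. */
        if (xQueueReceive(xStateEventQueue, &newState, portMAX_DELAY) == pdTRUE
            && newState != currentState)
        {
            currentState = newState;

            switch (currentState)
            {
                case 1:  /* Task1, Task2, Task3 */
                    vTaskSuspend(xTask4); vTaskSuspend(xTask5); vTaskSuspend(xTask6);
                    vTaskResume(xTask2);  vTaskResume(xTask3);
                    break;
                case 2:  /* Task1, Task2, Task3, Task4 */
                    vTaskSuspend(xTask5); vTaskSuspend(xTask6);
                    vTaskResume(xTask2);  vTaskResume(xTask3); vTaskResume(xTask4);
                    break;
                case 3:  /* Task1, Task5 */
                    vTaskSuspend(xTask2); vTaskSuspend(xTask3);
                    vTaskSuspend(xTask4); vTaskSuspend(xTask6);
                    vTaskResume(xTask5);
                    break;
                case 4:  /* Task1, Task6 */
                    vTaskSuspend(xTask2); vTaskSuspend(xTask3);
                    vTaskSuspend(xTask4); vTaskSuspend(xTask5);
                    vTaskResume(xTask6);
                    break;
                default:
                    break;
            }
        }
    }
}
Because the control task blocks on the queue, it costs nothing while no state change is pending, and suspend/resume only runs on an actual transition, which also addresses the "first entry into a state" point made above.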
I have two Actions with the same input/output/error types, and I'd like to compose them into a single Action that runs whichever of the two is enabled (with an arbitrary tie-breaker if they both are).
Here's my first, failing, attempt:
let addOrRemove: Action<MyInput, MyOutput, APIRequestError> = Action(enabledIf: add.isEnabled.or(remove.isEnabled)) { input in
if add.isEnabled.value {
return add.apply(input)
} else {
return remove.apply(input)
}
}
This fails because the inner add.apply(input) can't see that I checked add.isEnabled, so it wraps an additional ActionError<> layer around the error type. (This might be legit, as I'm not sure how thread-safe this approach would be, or might be a case of us knowing something the type system doesn't.) The corresponding type error is:
cannot convert return expression of type 'SignalProducer<MyOutput, ActionError<APIRequestError>>' to return type 'SignalProducer<MyOutput, APIRequestError>'
What should I do instead?
GitHub user @ikesyo provided the following answer on the ReactiveSwift issue I opened to ask the same question:
let producer: SignalProducer<MyOutput, ActionError<APIRequestError>>
if add.isEnabled.value {
producer = add.apply(input)
} else {
producer = remove.apply(input)
}
return producer.flatMapError { error in
switch error {
case .disabled: return .empty
case let .producerFailed(inner): return SignalProducer(error: inner)
}
}
If they show up here I'll happily change the accepted answer to theirs (credit where it belongs).
Warning: If I'm reading this answer correctly, it's not watertight. If add changes from enabled to disabled between the apply() and the start() of the wrapping Action, we'll get "success" (no values, but .completed) instead of the .disabled we should get. That's good enough for my use case, but YMMV.
I'm currently polling my CFReadStream for new data with CFReadStreamHasBytesAvailable.
(First, some background: I'm doing my own threading and I don't want/need to mess with runloop stuff, so the client callback stuff doesn't really apply here).
My question is: what are accepted practices for polling?
Apple's documentation on the subject doesn't seem too helpful.
They recommend to "do something else while you wait". I'm currently just doing something along the lines of:
while(!done)
{
if(CFReadStreamHasBytesAvailable(readStream))
{
CFReadStreamRead(...) ... bla bla bla
} else {
usleep(3600); // I made this up
sched_yield(); // also made this up
continue;
}
}
Is the usleep and the sched_yield "good enough"? Is there a "good" number to sleep for in usleep?
(Also: yes, because this is running in my own thread, I could just block on CFReadStreamRead - which would be great but I'm also trying to snag upload progress as well as download progress, so blocking there wouldn't help...).
Any insight would be much appreciated - thanks!
I think this question is a bit of a paradox because you're asking what the best practices are for doing something that's intrinsically not a best practice ;)
When there's a perfectly good method for blocking on network I/O, any compromise that causes you to poll instead is by definition not the best practice.
That said, if you do poll I think it might be more appropriate to "run the runloop until date" on your thread, instead of using whatever posix sleep or yield method you're imagining. Remember that each thread gets its own runloop, so essentially by running the runloop you're allowing Apple to employ its concept of best practices for blocking until a future date.
As for the time delay, I don't know if you'll get a definitive answer for what a good time is. It's a tradeoff between peppering the CPU with polling cycles vs. being stuck in the runloop for a little while when I/O is ready to be read from the network.
Ideally I think I would refocus your efforts on making this work using I/O blocking calls, but if you stick with the poll & idle technique, don't fret too much about the specific delay time. Just pick something that works and doesn't seem to impact performance negatively in either direction.
(Also, I'd like to clarify that I'm not too religious about the polling vs. blocking thing, I'm only stressing its value because you're obviously in search of an elevated solution).
When doing manual CFStream based connections on a separate thread (for custom things like bandwidth monitoring and throttling), I use a combination of CFReadStreamScheduleWithRunLoop, CFRunLoopRunInMode and CFReadStreamSetClient. Basically I run for 0.25 seconds and then check stream status. The client callback also gets notified on its own as well. This allows me to periodically check read status and do some custom behavior but rely mostly on (stream) events.
static const CFOptionFlags kMyNetworkEvents =
kCFStreamEventOpenCompleted
| kCFStreamEventHasBytesAvailable
| kCFStreamEventEndEncountered
| kCFStreamEventErrorOccurred;
static void MyStreamCallBack(CFReadStreamRef readStream, CFStreamEventType type, void *clientCallBackInfo) {
[(id)clientCallBackInfo _handleNetworkEvent:type];
}
- (void)connect {
...
CFStreamClientContext streamContext = {0, self, NULL, NULL, NULL};
BOOL success = CFReadStreamSetClient(readStream_, kMyNetworkEvents, MyStreamCallBack, &streamContext);
CFReadStreamScheduleWithRunLoop(readStream_, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);
if (!CFReadStreamOpen(readStream_)) {
// Notify error
}
while(!cancelled_ && !finished_) {
SInt32 result = CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.25, NO);
if (result == kCFRunLoopRunStopped || result == kCFRunLoopRunFinished) {
break;
}
if (([NSDate timeIntervalSinceReferenceDate] - lastRead_) > MyConnectionTimeout) {
// Call timed out
break;
}
// Also handle stream status
CFStreamStatus status = CFReadStreamGetStatus(readStream_);
if (![self _handleStreamStatus:status]) break;
}
CFRunLoopStop(CFRunLoopGetCurrent());
CFReadStreamSetClient(readStream_, 0, NULL, NULL);
CFReadStreamUnscheduleFromRunLoop(readStream_, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);
CFReadStreamClose(readStream_);
}
- (void)_handleNetworkEvent:(CFStreamEventType)type {
switch(type) {
case kCFStreamEventOpenCompleted:
// Notify connected
break;
case kCFStreamEventHasBytesAvailable:
[self _handleBytes];
break;
case kCFStreamEventErrorOccurred:
[self _handleError];
break;
case kCFStreamEventEndEncountered:
[self _handleBytes];
[self _handleEnd];
break;
default:
Debug(#"Received unexpected CFStream event (%d)", type);
break;
}
}