I am using a GraphMachine to model a workflow of a MongoDB record.
I store only the state in MongoDB, and when I reload the record at a later time, I call set_state() on the machine to force it back to where it left off.
This all works correctly except when I try to show the state machine graph.
After loading, the graph always shows the initial state, even though set_state() apparently worked: transitions are accepted as if the machine were in the restored state.
Let's say I have a simple linear FSM like: S0 -> S1 -> S2 -> S3 -> S0.
S0 is the initial state, and S2 is where it was saved.
When I restore, the graph always shows S0, but if I trigger the S2 -> S3 transition, it is accepted, and when I draw the graph afterwards it correctly shows S3.
Is there a way I can make the GraphMachine 'initialize' to the correct state?
Thanks
Machine.set_state will hard-set the model state but won't call the callbacks needed to regenerate the graph. You can either pass the restored state as the initial state to the constructor or force a recreation of the graph after set_state:
from transitions.extensions import GraphMachine
states = ["A", "B", "C"]
m1 = GraphMachine(states=states, initial="A", ordered_transitions=True, show_state_attributes=True)
m1.next_state()
m2 = GraphMachine(states=states, initial=m1.state, ordered_transitions=True)
m2.get_graph().draw("machine2.png")
m1.set_state("C")
m1.get_graph(force_new=True).draw("machine1.png")
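Applied back to the MongoDB scenario from the question, a minimal sketch could look like the following; the collection name, document fields and workflow states are assumptions for illustration only:

from pymongo import MongoClient
from transitions.extensions import GraphMachine

states = ["S0", "S1", "S2", "S3"]  # hypothetical workflow states

client = MongoClient()
doc = client.mydb.workflows.find_one({"_id": "some-record"})  # assumed collection and fields

# Option 1: pass the persisted state as the initial state
machine = GraphMachine(states=states, initial=doc["state"], ordered_transitions=True)
machine.get_graph().draw("restored.png")

# Option 2: restore via set_state and force the graph to be rebuilt
machine = GraphMachine(states=states, initial="S0", ordered_transitions=True)
machine.set_state(doc["state"])
machine.get_graph(force_new=True).draw("restored.png")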
I am using ag-grid/ag-grid-angular to provide an editable grid of data backed by a database. When a user edits a cell, I want to post the update to the backend service; if the request succeeds, update the grid, and if not, undo the user's changes and show an error.
I have approached this problem from a couple of different angles but have yet to find a solution that meets all my requirements, and I am also curious what the best practice would be for implementing this kind of functionality.
My first thought was to leverage the cellValueChanged event. With this approach I can see the old and new values and then call my service to update the database. If the request succeeds, everything works as expected. However, if the request fails, I need to undo the user's changes. Since I have access to the old value, I can easily do something like event.node.setDataValue(event.column, event.oldValue) to revert the edit. But because this updates the grid again, it triggers the cellValueChanged event a second time. I have no way of knowing that this second event is the result of undoing the user's changes, so I unnecessarily call my service again even though the original request never succeeded.
I have also tried using a custom cell editor to get in between when the user is finished editing a cell and when the grid is actually updated. However, it appears that there is no way to integrate an async method in any of these classes to be able to wait for a response from the server to decide whether or not to actually apply the user's changes. E.g.
isCancelBeforeStart(): boolean {
  this.service.updateData(event.data).subscribe(() => {
    return false;
  }, error => {
    return true;
  });
}
does not work because this method is synchronous and I need to be able to wait for a response from my service before deciding whether to cancel the edit or not.
Is there something I am missing or not taking into account? Or another way to approach this problem to get my intended functionality? I realize this could be handled much more easily with dedicated edit/save buttons, but I am ideally looking for an interactive grid that saves changes to the backend as the user makes them and provides feedback when something goes wrong.
Any help/feedback is greatly appreciated!
I understand what you are trying to do, and I think that the best approach is going to be to use a "valueSetter" function on each of your editable columns.
With a valueSetter, the grid's value will not be directly updated - you will have to update your bound data to have it reflected in the grid.
When the valueSetter is called by the grid at the end of the edit, you'll probably want to record the original value somehow, update your bound data (so that the grid will reflect the change), and then kick off the back-end save, and return immediately from the valueSetter function.
(It's important to return immediately from the valueSetter function to keep the grid responsive. Since the valueSetter call from the grid is synchronous, if you try to wait for the server response, you're going to lock up the grid while you're waiting.)
Then, if the back-end update succeeds, there's nothing to do, and if it fails, you can update your bound data to reflect the original value.
With this method, you won't have the problem of listening for the cellValueChanged event.
The one issue that you might have to deal with is what to do if the user changes the cell value, and then changes it again before the first back-end save returns.
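A minimal sketch of that approach, assuming a hypothetical updateData() service call that returns a Promise and a column bound to a 'price' field (both names are illustrative, not from the original post):

// Column definition whose valueSetter applies the edit optimistically
// and reverts it if the (hypothetical) back-end call fails.
const priceColumn = {
  field: 'price',
  editable: true,
  valueSetter: params => {
    const oldValue = params.data.price;
    const newValue = params.newValue;
    if (oldValue === newValue) {
      return false;                 // nothing changed, no refresh needed
    }
    params.data.price = newValue;   // update the bound data so the grid shows the new value

    // Kick off the save, but return immediately so the grid stays responsive.
    updateData(params.data).then(
      () => { /* success: nothing more to do */ },
      () => {
        // Failure: restore the original value and repaint just this cell.
        params.data.price = oldValue;
        params.api.refreshCells({ rowNodes: [params.node], columns: ['price'] });
      }
    );
    return true;                    // tell the grid the value was set
  }
};

How to handle a second edit that arrives before the first save returns (the issue mentioned above) is deliberately left out of this sketch.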
onCellValueChanged: (event) => {
  if (event.oldValue === event.newValue) {
    return;
  }
  try {
    // apiUpdate(event.data)
  }
  catch {
    event.node.data[event.colDef.field] = event.oldValue;
    event.node.setDataValue(event.column, event.oldValue);
  }
}
By changing the value back on node.data first, when setDataValue() triggers the change event again, oldValue and newValue are now the same, so the function returns early, avoiding what would otherwise be an infinite loop of change events.
I think it works because you first change the data directly via node.data, without ag-Grid noticing, and then make a change that ag-Grid does recognise by calling setDataValue, which rerenders the cell, thereby tricking ag-Grid into behaving.
I would suggest a slightly better approach than StangerString's, though to give credit, the idea came from his answer. Rather than testing oldValue against newValue and letting the event fire twice, you can bypass the change detection entirely by doing the following:
event.node.data[event.colDef.field] = event.oldValue;
event.api.refreshCells({ rowNodes: [event.node], columns: [event.column.colId] });
What that does is set the value directly in the data store used by ag-Grid and then tell it to refresh that cell. This prevents the onCellValueChanged event from having to be called again.
(If you aren't using colIds, you can use the field or pass the whole column; I think any of them work.)
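Putting the two answers together, the earlier handler could then be sketched like this, with apiUpdate() remaining a placeholder for your own back-end call:

onCellValueChanged: async (event) => {
  try {
    await apiUpdate(event.data);   // placeholder for your back-end call
  } catch {
    // Revert directly in the bound data and repaint only this cell,
    // so onCellValueChanged is not fired a second time.
    event.node.data[event.colDef.field] = event.oldValue;
    event.api.refreshCells({ rowNodes: [event.node], columns: [event.column.colId] });
  }
}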
I have events (ProductOrderRequested, ProductColorChanged, ProductDelivered...) and I want to build a golden record of my product.
But my goal is to build the golden record step by step: each session of activity gives me an updated state of my product, and I need to store each version of the state for traceability purposes.
I have a fairly simple pipeline (code is better than words):
events
    .apply("SessionWindow",
        Window.<KV<String, Event>>into(Sessions.withGapDuration(gapSession))
            .triggering(<early and late data trigger>))
    .apply("GroupByKey", GroupByKey.create())
    .apply("ComputeState", ParDo.of(new StatefulFn()))
My problem is that, for a given window, I have to compute the new state based on:
The previous state (i.e computed state of the previous window)
The events received
I would like to avoid calling an external service to get the previous state, and instead get the state of the previous window. Is that possible?
In Apache Beam, state is always scoped per window (also see this answer). So I can only think of re-windowing into the global window and handling the state there. In this global StatefulFn you can store and handle the prior state(s).
It would then look like this:
events
    .apply("SessionWindow",
        Window.<KV<String, Event>>into(Sessions.withGapDuration(gapSession))
            .triggering(<early and late data trigger>))
    .apply("GroupByKey", GroupByKey.create())
    .apply("Re-window into Global Window",
        Window.<KV<String, Event>>into(new GlobalWindows())
            .triggering(<early and late data trigger>))
    .apply("ComputeState", ParDo.of(new StatefulFn()))
Please also note that as of now, Apache Beam doesn't support stateful processing for merging windows (see this issue). Therefore, your StatefulFn on a session window basis will not work properly when your triggers emit early or late results of session windows since the state is not merged. This is another reason to work with a non-merging window like the global window.
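For reference, a stateful DoFn in the global window could be sketched roughly as follows. ProductState and mergeEvents() are hypothetical stand-ins for your own domain model and merge logic, ProductState is assumed to implement Serializable, and note that after GroupByKey the element values are Iterable<Event>:

import org.apache.beam.sdk.coders.SerializableCoder;
import org.apache.beam.sdk.state.StateSpec;
import org.apache.beam.sdk.state.StateSpecs;
import org.apache.beam.sdk.state.ValueState;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

// Keeps the last computed state per key and folds each new session's events into it.
class StatefulFn extends DoFn<KV<String, Iterable<Event>>, KV<String, ProductState>> {

    @StateId("lastState")
    private final StateSpec<ValueState<ProductState>> lastStateSpec =
        StateSpecs.value(SerializableCoder.of(ProductState.class));

    @ProcessElement
    public void processElement(ProcessContext c,
                               @StateId("lastState") ValueState<ProductState> lastState) {
        ProductState previous = lastState.read();        // state computed for the previous session, if any
        ProductState updated = mergeEvents(previous, c.element().getValue());
        lastState.write(updated);                        // remember it for the next session
        c.output(KV.of(c.element().getKey(), updated));  // emit this version for traceability
    }

    // Hypothetical merge of the previous state with this session's events.
    private ProductState mergeEvents(ProductState previous, Iterable<Event> events) {
        // ... your domain logic goes here ...
        return previous;
    }
}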
I am getting the following exception in a single-box CQ5 author environment:
javax.jcr.InvalidItemStateException: Item cannot be saved because node property has been modified externally
more exception details:
Caused by: javax.jcr.InvalidItemStateException: Unable to update a stale item: item.save()
at org.apache.jackrabbit.core.ItemSaveOperation.perform(ItemSaveOperation.java:262)
at org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:216)
at org.apache.jackrabbit.core.ItemImpl.perform(ItemImpl.java:91)
at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:329)
at org.apache.jackrabbit.core.session.SessionSaveOperation.perform(SessionSaveOperation.java:65)
at org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:216)
at org.apache.jackrabbit.core.SessionImpl.perform(SessionImpl.java:361)
at org.apache.jackrabbit.core.SessionImpl.save(SessionImpl.java:812)
at com.day.crx.core.CRXSessionImpl.save(CRXSessionImpl.java:142)
at org.apache.sling.jcr.resource.internal.helper.jcr.JcrResourceProvider.commit(JcrResourceProvider.java:511)
... 215 more
Caused by: org.apache.jackrabbit.core.state.StaleItemStateException: 3bec1cb7-9276-4bed-a24e-0f41bb3cf5b7/{}ssn has been modified externally
at org.apache.jackrabbit.core.state.SharedItemStateManager$Update.begin(SharedItemStateManager.java:679)
at org.apache.jackrabbit.core.state.SharedItemStateManager.beginUpdate(SharedItemStateManager.java:1507)
at org.apache.jackrabbit.core.state.SharedItemStateManager.update(SharedItemStateManager.java:1537)
at org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:400)
at org.apache.jackrabbit.core.state.XAItemStateManager.update(XAItemStateManager.java:354)
at org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:375)
at org.apache.jackrabbit.core.state.SessionItemStateManager.update(SessionItemStateManager.java:275)
at org.apache.jackrabbit.core.ItemSaveOperation.perform(ItemSaveOperation.java:258)
Here is the code sample:
adminResourceResolver = resourceResolverFactory
        .getAdministrativeResourceResolver(null);
Resource fundPageResource = adminResourceResolver.getResource(page
        .getPath() + "/jcr:content");
ModifiableValueMap homePageResourceProperties = fundPageResource
        .adaptTo(ModifiableValueMap.class);
homePageResourceProperties.put("ssn", person.getSsn());
adminResourceResolver.commit();
Any ideas? It is possible that multiple threads access this code, since multiple authors on multiple pages can call it from an authored component.
Thank you,
Sri
This is an error you see often in CQ 5.5 (and it becomes less frequent with each newer version). The root cause of this issue is that multiple processes/services modify the same resource in roughly the same timespan (usually using different sessions, sometimes even with different users).
A small example to demonstrate, perhaps. Sessions A and B both have a reference to resource X. Session A modifies some properties on X, saves, commits, and is destroyed; this all goes smoothly. Session B still has a snapshot of the situation before A made its modifications; it makes its own modifications and all seems well UNTIL it tries to save. At this point session B detects that it can't commit its changes because it doesn't have the latest node state: it has detected that another session made changes to the same node. In essence, the current node state conflicts with the modifications session A has made, and a stale-item exception is thrown. The reason for this exception is that the API doesn't know whether you want to keep the changes made by A, keep the changes made by the current session and discard those made by A, or merge them.
This error happens often with long-running sessions and with workflow/listener combinations. The recommendation is therefore to keep sessions as short as possible to prevent this kind of conflict as much as possible.
One way to deal with this is to call session.refresh(keepChangesBoolean) before calling .save(). This instructs the current session to check for updates made by other sessions and to deal with them according to the boolean flag you pass. This, however, is not a guarantee: it's still possible that between your refresh and your save, yet another session has done the same. It only lowers the odds of the exception occurring.
Another way to deal with it is to retry the whole operation from scratch.
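As an illustration of the refresh-and-retry idea at the plain JCR level (the retry count and the property written here are arbitrary choices, not taken from the code above):

import javax.jcr.InvalidItemStateException;
import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

// Illustrative helper: refresh before saving and retry a few times if the item turns out to be stale.
void saveWithRetry(Session session, Node node, String value) throws RepositoryException {
    int attempts = 3;                      // arbitrary retry budget
    while (true) {
        try {
            session.refresh(true);         // pull in other sessions' changes, keep our own
            node.setProperty("ssn", value);
            session.save();
            return;
        } catch (InvalidItemStateException e) {
            if (--attempts == 0) {
                throw e;                   // give up after a few attempts
            }
            session.refresh(false);        // discard our stale changes and start over
        }
    }
}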
I'm a student learning to use MATLAB. For an assignment, I have to create a simple state machine and collect some results. I'm used to using Verilog/Modelsim, and I'd like to collect data only when the state machine's output changes, which is not necessarily every time/sample period.
Right now I have a model that looks like this:
RequestChart ----> ResponseChart ----> Unit Delay Block --> (Back to RequestChart)
| |
------------------------> Mux --> "To Workspace" Sink Block
I've tried setting the sink block to save as "Array" format, but it only saves 51 values. I've tried setting it to "Timeseries", but it saves tons of zero values.
Can someone give me some suggestions? Like I said, MATLAB is new to me, please let me know if I need to clarify my question or provide more information.
Edit: Here's a screen capture of my model:
Generally Simulink will output a sample at every integration step. If you want to only output data when a particular event occurs -- in this case only when some data changes -- then do the following,
run the output of the state machine into a Detect Change block (from the Logic and Bit Operations library)
run that signal into the trigger port of a Triggered Subsystem.
run the output of the state machine into the data port of the Triggered Subsystem.
inside the triggered subsystem, run the data signal into a To Workspace block.
Data will only be saved at the time points where the trigger occurs, i.e. when your data changes.
In your Simulink window, make sure the Relative Tolerance is small so that you can generate many more points in between your start and ending time. Click on the Simulation option at the top of the window, then click on Model Configuration Parameters.
From there, change the Relative Tolerance to something small... like 1e-10. After that, try running your simulation again. You should have a lot more points in your output array that you can now save.
I've got a command-line app that iterates through CSV files to create a Core Data SQLite store. At some point I'm building these SPStop objects, which have routes and schedules to-many relationships:
SPRoute *route = (SPRoute *)[self.idTransformer objectForOldID:routeID ofType:@"routes" onContext:self.coreDataController.managedObjectContext];
SPStop *stopObject = (SPStop *)[self.idTransformer objectForOldID:stopID ofType:@"stops" onContext:self.coreDataController.managedObjectContext];
[stopObject addSchedulesObject:scheduleObject];
[stopObject addRoutesObject:route];
[self.coreDataController saveMOC];
If I log my stopObject object (before or after saving; same result), I get the following:
latitude = "45.50909";
longitude = "-73.80914";
name = "Roxboro-Pierrefonds";
routes = (
"0x10480b1b0 <x-coredata://A7B68C47-3F73-4B7E-9971-2B2CC42DB56E/SPRoute/p2>"
);
schedules = (
"0x104833c60 <x-coredata:///SPSchedule/tB5BCE5DC-1B08-4D11-BCBB-82CD9AC42AFF131>"
);
Notice how the routes and schedules object URL formats differ? This must be for a reason, because further down the road when I use the sqlite store and print the same stopObject, my routes set is empty, but the schedules one isn't.
I realize this is very little debugging information but maybe the different URL formats rings a bell for someone? What could I be doing wrong that would cause this?
EDIT: it seems that one SPRoute object can only be assigned to one SPStop at a time. I inserted breakpoints at the end of each iteration and had a look at the SQLite store every time, and I can definitely see that as soon as an SPRoute object (that had already been assigned to a previous stop's routes) is assigned to a new SPStop, the previous stop's routes set gets emptied. How can this be?
Well, we had disabled Xcode's inverse-relationship warning, which clearly states:
SPStop.routes does not have an inverse; this is an advanced
setting (no object can be in multiple destinations for a specific
relationship)
That was precisely our issue. We had ditched inverse relationships because Apple states that they're only good for "data integrity". Our store is read-only, so we figured we didn't really need them. We now know that inverse relationships are for a little more than just "data integrity" :P