How to discard only the last element of the CKEditor undo stack?

Is there a way to remove only the last snapshot in the CKEditor undo stack, or can I replace it with another? Should I implement this myself?
Example:
Step 1
Step 2 -- should be removed and replaced with Step 3 (in the given situation)
Step 3 -- should become Step 2
This feature should be available only if a special event occurs.

If your undo snapshots are a result of user actions, do it this way:
Step 1.
Step 2.
CKEDITOR.instances.editor.fire( 'lockSnapshot' )
Step 3.
CKEDITOR.instances.editor.fire( 'unlockSnapshot' )
Of course, you have to detect what's going on and fire the right event at the right time.
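For instance, a minimal sketch of wiring the lock/unlock around your own signal (the event names mySpecialEvent and mySpecialEventDone are hypothetical placeholders for whatever condition you detect):
var editor = CKEDITOR.instances.editor;

editor.on( 'mySpecialEvent', function() {
    // Stop recording snapshots, so the next change merges into the previous undo step.
    editor.fire( 'lockSnapshot' );
} );

editor.on( 'mySpecialEventDone', function() {
    // Resume normal snapshot recording.
    editor.fire( 'unlockSnapshot' );
} );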
If changes to the content are made from code, the editor#updateSnapshot event would be even better:
function() {
    editor.fire( 'saveSnapshot' );
    editor.document.body.append( ... );
    // Makes new changes following the last undo snapshot a part of it.
    editor.fire( 'updateSnapshot' );
    // ...
}

Related

How to call a step (to do a revert) if any previous step fails?

I have an Argo WorkflowTemplate which has n steps.
I want to call the last step only if any of the previous steps fails.
Example: in a 5-step template, if the 2nd step fails, skip 3 and 4 and only call 5, since it's a revert step. If all steps pass, don't call the 5th, because there is no need to revert.
You can define a workflow exit handler to run after all the other steps.
By adding a when condition, you can make sure the exit handler runs if and only if one of the previous steps failed.
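A minimal sketch of what that could look like (the template names, step names, and images are made-up placeholders):
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: steps-with-revert
spec:
  entrypoint: main
  # The exit handler runs after the main steps, whether they succeeded or failed.
  onExit: revert-if-failed
  templates:
    - name: main
      steps:
        - - name: step1
            template: do-work
        - - name: step2
            template: do-work
        # ... steps 3 and 4 ...
    - name: revert-if-failed
      steps:
        - - name: revert
            template: do-revert
            # Only run the revert step when the workflow did not succeed.
            when: "{{workflow.status}} != Succeeded"
    - name: do-work
      container:
        image: alpine:3.18
        command: [sh, -c, "echo doing work"]
    - name: do-revert
      container:
        image: alpine:3.18
        command: [sh, -c, "echo reverting"]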

Avoiding repetitive calls when creating reactfire hooks

When initializing a component using reactfire, each time I add a reactfire hook (e.g. useFirestoreDocData), it triggers a re-render and therefore repeats all previous initialization. For example:
const MyComponent = props => {
    console.log(1);
    const firestore = useFirestore();
    console.log(2);
    const ref = firestore.doc('count/counter');
    console.log(3);
    const { value } = useFirestoreDocDataOnce(ref);
    console.log(4);
    ...
    return <span>{value}</span>;
};
will output:
1
1
2
3
1
2
3
4
This seems wasteful; is there a way to avoid it?
This is particularly problematic when I need the result of one reactfire hook to create another (e.g. retrieve data from one document to determine which other document to read) and it duplicates the server calls.
See React's documentation of Suspense, particularly the part "Approach 3: Render-as-You-Fetch (using Suspense)".
Reactfire uses this mechanism. It is not supposed to fetch more than once per call, even if the line is executed more than once; the machinery behind it "understands" that the fetch is already done and moves on to the next one.
In your case, React tries to render your component, sees that it needs to fetch, stops rendering and shows Suspense's fallback while fetching. When the fetch is done, it retries rendering your component, and since the fetch is now complete, the render finishes.
You can confirm in the network tab that each call is made only once.
I hope I'm clear; please don't hesitate to ask for more details if I'm not.
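A rough sketch of how this fits together, mirroring the hooks and document path from the question (the FirebaseAppProvider setup from the reactfire docs is assumed and omitted): the component body may run several times, but the underlying request is made only once.
import React, { Suspense } from 'react';
import { useFirestore, useFirestoreDocDataOnce } from 'reactfire';

const Counter = () => {
    // This body can execute more than once while the data is loading...
    const firestore = useFirestore();
    const ref = firestore.doc('count/counter');
    // ...but reactfire caches the request, so the document is fetched only once.
    const { value } = useFirestoreDocDataOnce(ref);
    return <span>{value}</span>;
};

const App = () => (
    // While Counter is suspended, React renders the fallback instead.
    <Suspense fallback={<span>loading...</span>}>
        <Counter />
    </Suspense>
);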

SFC Steps in IEC 61131-3 Programming

So I have a problem where my SFC jumps to an initial step, but the commands written in that step do not register.
At the end of the SFC, a step writes 5 into A_Status (INT).
The very next transition checks whether the value of A_Status is 5.
No problems so far, but after the transition, when it jumps back to the start of the SFC,
where the first step is supposed to write 0 into A_Status, A_Status stays at 5.
The cycle time of my program is 100 ms. I have tried slowing the cycle, but it didn't help.
What seems to be the problem here? Maybe using the same variable in such a sequence just doesn't work?
A reply would be greatly appreciated.
You don't mention whether you write the values during Entry/Exit or in the SFC step actions. But beware that in some situations code from a previous step can be executed later than code in the new step.
Here is a link that explains the call order and why parts of the code are sometimes executed twice:
https://infosys.beckhoff.com/english.php?content=../content/1033/tc3_plc_intro/45035999420423563.html
I've had success with adding the following code in all the actions to prevent this from happening.
IF STEP_NAME.x THEN // Only execute this while the step is active.
    // Insert code here.
END_IF

Apache Flink, get last event in the window

I'm working on a project where I have a window with a size of 4 days and a slide of 1 day:
.timeWindow(Time.days(4), Time.days(1))
and I also have a trigger:
.trigger(new myTrigger)
onEventTime ---> Continue
onProcessingTime ---> Continue
clear ---> Purge
onElement ---> (if element.isFinalTransaction) TriggerResult.FIRE_AND_PURGE
isFinalTransaction is a boolean; when it is true, the trigger returns FIRE_AND_PURGE.
The main question is: how can I make it return true/false depending on whether the element is the last one in the window?
Is there any method that can tell us whether the current element is the last one in the window?
Is there any method that can tell us whether the current window is done (before sliding) or not?
From the abstract Trigger class (https://github.com/apache/flink/blob/master//flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/triggers/Trigger.java):
The short answer is no. The method onElement is called for every element that gets added to the pane. When an element is added, it's impossible to know whether it is the last one, because that information isn't available until the next element arrives (and we see whether it falls into this window or the next one).
However, one alternative would be to check whether the element is sufficiently close to the end of the window (onElement has access to the window), e.g. if (timestamp > window.getEnd() - delta) ...
However, I cannot think of a use case in which I would recommend this. If you need access to the last element in the window, you should probably just use a WindowFunction and, in the apply method, take the last element of the input iterable (input.last), for example as sketched below.
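A minimal Scala sketch of that approach (the Transaction case class and its fields are made-up placeholders):
import org.apache.flink.streaming.api.scala.function.WindowFunction
import org.apache.flink.streaming.api.windowing.windows.TimeWindow
import org.apache.flink.util.Collector

case class Transaction(accountId: String, amount: Double, isFinalTransaction: Boolean)

class LastElementFunction extends WindowFunction[Transaction, Transaction, String, TimeWindow] {
    override def apply(key: String, window: TimeWindow, input: Iterable[Transaction],
                       out: Collector[Transaction]): Unit = {
        // input holds every element assigned to this window; emit the last one.
        out.collect(input.last)
    }
}

// Usage sketch:
// stream.keyBy(_.accountId)
//       .timeWindow(Time.days(4), Time.days(1))
//       .apply(new LastElementFunction)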

flink streaming window trigger

I have a Flink stream and I am calculating a few things over a time window of, say, 30 seconds.
What happens is that it gives me a result that aggregates previous windows as well.
Say for the first 30 seconds I get the result 10.
For the next thirty seconds I want a fresh result; instead I get the last window's result + the new one,
and so on.
So my question is: how do I get a fresh result for each window?
You need to use a purging trigger. What you want is FIRE_AND_PURGE (emit and remove the window content); what the default Flink trigger does is FIRE (emit and keep the window content).
input
    .keyBy(...)
    .timeWindow(Time.seconds(30))
    // The important part: replace the default non-purging ProcessingTimeTrigger
    .trigger(PurgingTrigger.of(ProcessingTimeTrigger.create()))
    .reduce(...)
For a more in-depth explanation, have a look at Triggers and FIRE vs FIRE_AND_PURGE.
A Trigger determines when a window (as formed by the window assigner) is ready to be processed by the window function. Each WindowAssigner comes with a default Trigger. If the default trigger does not fit your needs, you can specify a custom trigger using trigger(...).
When a trigger fires, it can either FIRE or FIRE_AND_PURGE. While FIRE keeps the contents of the window, FIRE_AND_PURGE removes its content. By default, the pre-implemented triggers simply FIRE without purging the window state.
The functionality you describe can be found in Tumbling Windows: https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/windows.html#tumbling-windows
A bit more detail and/or code would help :)
I'm a little late to this question, but I encountered the same issue as the OP. What I found out later was a bug in my own code. FYI, my mistake could be a good reference for your problem.
// Old code (modified to be an example):
val tenSecondGrouping: DataStream[MyCustomGrouping] = userIdsStream
    .keyBy(_.somePartitionedKey)
    .window(TumblingProcessingTimeWindows.of(Time.of(10, TimeUnit.SECONDS)))
    .trigger(ProcessingTimeTrigger.create())
    .aggregate(new MyCustomAggregateFunc(new MyCustomGrouping()))
The bug happened at new MyCustomGrouping(): I unintentionally created a singleton MyCustomGrouping object and reused it in MyCustomAggregateFunc. As more tumbling windows were created, the later aggregation results grew out of control! The fix was to create a new MyCustomGrouping each time MyCustomAggregateFunc is triggered. So:
// New code, problem solved
...
    .aggregate(new MyCustomAggregateFunc(() => new MyCustomGrouping()))
    // passing in a function to create a new object per trigger