Is there any way to let widget state update messages through from the JavaScript side to the Python side during Jupyter cell execution?
We have a DOMWidget displaying a webpage with an API for communication over HTTP messages, so we manage to send commands and get the response back on the JavaScript side of the widget. The problem is that the widget state update isn't received on the Python side until after the Jupyter cell has finished its execution. I guess this is because the kernel is busy with the cell execution and the receive functions are put on a queue.
The workaround for now is that the response is written to a synced traitlet from the JavaScript side, which the user can access in a subsequent cell. We also display an output widget with text linked to another traitlet to give information when the response is received. This is a bit messy, and I would like more control over execution, so that subsequent cells wait until the response is received or a timeout expires.
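For concreteness, the Python side of that workaround looks roughly like the following. This is a minimal sketch, not the actual code: the ApiWidget class, the ApiView/api_widget view names, and the command/response/status traitlet names are all hypothetical placeholders for whatever your DOMWidget actually defines.

import ipywidgets as widgets
import traitlets

class ApiWidget(widgets.DOMWidget):
    # hypothetical view names -- point these at your own JavaScript module
    _view_name = traitlets.Unicode('ApiView').tag(sync=True)
    _view_module = traitlets.Unicode('api_widget').tag(sync=True)

    command = traitlets.Unicode('').tag(sync=True)     # Python -> JavaScript
    response = traitlets.Unicode('').tag(sync=True)    # JavaScript -> Python (the synced traitlet)
    status = traitlets.Unicode('idle').tag(sync=True)  # linked to a label for user feedback

w = ApiWidget()
w.command = 'GET /state'  # placeholder: the JS view sends this over HTTP and later writes w.response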
No, there is no direct way to make Jupyter code cells wait for an event or value sent from a widget. The problem is that the (Python) kernel only sends one-way messages to the (Javascript) widget and there is no way to make the kernel wait for a response from the Javascript side.
Python code can intercept callbacks invoked by Javascript in various ways, but the results of those callbacks will not show up in cell output in the usual way. They must be stored in the kernel state or displayed in a widget context -- see the standard Output widget, for example:
https://ipywidgets.readthedocs.io/en/stable/examples/Output%20Widget.html
You can use the output widget to display results sent from the Javascript implementation of a widget, but the output shows up in the widget context, not as the usual cell output.
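Continuing the hypothetical ApiWidget sketch from the question above, this is roughly what that looks like from Python: an Output widget plus a traitlet observer. The observer fires when the JavaScript side updates the synced traitlet, and whatever it prints lands in the Output widget's area rather than in the cell output.

from IPython.display import display
import ipywidgets as widgets

out = widgets.Output()
w = ApiWidget()          # the hypothetical widget sketched in the question
display(w, out)

def on_response(change):
    # Called by the kernel when the JavaScript side sets the synced 'response'
    # traitlet. Note this only runs once the kernel is idle, i.e. after the
    # current cell has finished executing.
    with out:
        print("response received:", change['new'])

w.observe(on_response, names='response')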
For example, the following strategy (sketched in code below) won't work if the user does a "run all":
Cell 1 creates a widget which interacts with the user and calls back to Python to create a file "./xxx.dat" in the filesystem.
Cell 2 enters an infinite polling loop, waiting for "./xxx.dat" to appear before proceeding.
If the user runs the cells interactively, one at a time, this might work. But in the present implementation of Jupyter widgets, if the user does "run all", the infinite loop in Cell 2 must start and complete before the widget in Cell 1 is created. Since the loop never completes, the widget never gets created, the file never appears, and the notebook is frozen.
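A minimal sketch of that anti-pattern, assuming a hypothetical FileCreatingWidget whose JavaScript side eventually calls back into Python to write ./xxx.dat; only the polling loop in Cell 2 is concrete:

# Cell 1: create the widget; its Python callback is supposed to write ./xxx.dat
# w = FileCreatingWidget()   # hypothetical widget
# display(w)

# Cell 2: poll for the file. Under "run all" this loop starts before the
# widget's JavaScript ever runs, so the file never appears and the loop,
# and therefore the notebook, never finishes.
import os
import time

while not os.path.exists("./xxx.dat"):
    time.sleep(0.5)
print("./xxx.dat appeared, continuing")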
Also, please see the jp_proxy_widget tutorial for more discussion of the asynchronous nature of Jupyter widgets:
https://github.com/AaronWatters/jp_proxy_widget/blob/master/notebooks/Tutorial.ipynb
Related
I'm creating a SwiftUI multiplatform app in Xcode, and I have a section of my code that hangs. I want to update the user so they know what's happening and are willing to wait. I originally planned to have an alert with text that changed, and then planned to have a Text element that updated. In both cases the message isn't shown until after the code executes, and so it only shows the final message (normally the success "done" message, unless an error made it end sooner).
Is there any way I can show the user a message, through an alert or a SwiftUI element, that is updated right away and thus will actually be helpful?
The fact that the alert isn't even shown until after the code executes is bad and incorrect. This suggests that you are doing something lengthy on the main thread, and that is an absolute no-no. You are freezing the interface and you risk the WatchDog process crashing your app before the user's very eyes. If something takes time, do it in the background.
I'm still pretty new to scripting in Unity3D, and I'm following along with a tutorial that uses GUI.Button() to draw a button on the screen.
I am intrigued by how this function works. Looking through the documentation, the proper use of GUI.Button is to invoke the function in an if statement and put the code to be called when the button is pushed within the if statement's block.
What I want to know is, how does Unity3D "magically" delay the code in the if statement until after the button is clicked? If it was being passed in as a callback function or something, then I could understand what was going on. Perhaps Unity is using continuations under the hood to delay the execution of the code, but then I feel like it would cause code after the if statement to be executed multiple times. I just like to understand how my code is working, and this particular function continues to remain "magical" to me.
I don't know if it's the right term, but I usually refer to such a system as an immediate mode GUI.
how does Unity3D "magically" delay the code in the if statement until after the button is clicked?
GUI.Button simply returns true if a click event happened inside the button bounds during the last frame. Basically, by calling that function you are polling: every frame, for every button, you ask the engine whether an event happened in that button's screen area.
If it was being passed in as a callback function or something, then I could understand what was going on
You are probably used to an MVC-like pattern, where you pass a controller delegate that's called when a UI event is raised from the view. This is something really different.
Perhaps Unity is using continuations under the hood to delay the execution of the code, but then I feel like it would cause code after the if statement to be executed multiple times.
No. The function simply returns immediately, and returns true only if an event happened. If it returns false, the code inside the if block won't be executed at all.
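To make the polling idea concrete, here is a minimal sketch in plain Python of what an immediate mode button boils down to. This is not Unity's actual implementation, just the pattern: a function that draws the button and reports whether a click landed inside its rectangle during the current frame. The Rect, frame_events and on_gui names are invented for the illustration.

class Rect:
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h

    def contains(self, px, py):
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

frame_events = []  # the engine would fill this with the current frame's input events

def button(rect, label):
    # (drawing of the button omitted)
    # Return True only if a click event happened inside the button bounds this frame.
    return any(e["type"] == "click" and rect.contains(e["x"], e["y"])
               for e in frame_events)

def on_gui():
    # Called every frame, like OnGUI: the if-block runs only on the frame
    # in which button() saw a matching click event -- no callbacks, no magic.
    if button(Rect(10, 10, 100, 30), "Restart"):
        print("Restart clicked")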
Side notes:
- That kind of system is hard to maintain, especially for a complex structured GUI.
- It has really serious performance implications (memory allocation, one draw call per UI element).
- Unless you are writing an editor extension or custom inspector code, I'd stay away from it. If you want to build a menu, implement your own system or use an external plugin (there are several good ones: NGUI, EZGUI, ..).
- Unity has already announced a new integrated UI system; it should be released soon.
Good question. The Unity3D GUI goes through several event phases; as the documentation puts it:
Events correspond to user input (key presses, mouse actions), or are UnityGUI layout or rendering events.
For each event, OnGUI is called in the scripts, so OnGUI is potentially called multiple times per frame. Event.current corresponds to the "current" event inside the OnGUI call.
In OnGUI you can find out which event is currently happening with Event.current.
The following event types (types of UnityGUI input and processing events) are processed:
- MouseDown: a mouse button was pressed.
- MouseUp: a mouse button was released.
- MouseMove: the mouse was moved (editor views only).
- MouseDrag: the mouse was dragged.
- KeyDown: a keyboard key was pressed.
- KeyUp: a keyboard key was released.
- ScrollWheel: the scroll wheel was moved.
- Repaint: a repaint event; one is sent every frame.
- Layout: a layout event.
- DragUpdated: editor only; drag & drop operation updated.
- DragPerform: editor only; drag & drop operation performed.
- DragExited: editor only; drag & drop operation exited.
- Ignore: the event should be ignored.
- Used: an already processed event.
- ValidateCommand: validates a special command (e.g. copy & paste).
- ExecuteCommand: executes a special command (e.g. copy & paste).
- ContextClick: the user has right-clicked (or control-clicked on the Mac).
Unity GUI has improved a lot lately and is quite useful if you want to handle things programmatically. If you want to handle things visually, I recommend looking at the plugins heisenbug refers to.
If you decide to use Unity GUI, I recommend using only one object with OnGUI, and letting this object handle all your GUI.
I'm automating an app that shows overlay messages anywhere in the app for several scenarios, such as the app being installed for the first time, etc. (I'm fairly new to Robotium too.)
The overlay displays text that goes away when you swipe or click on it. Also, there are different types of these overlays, each with different unique text. (Let's call this Activity A.)
I wanted to create a robust test case that handles this gracefully. From the test's perspective, we won't know whether Activity A will be present at any given time, but I want to recover from the scenario if it does appear, by writing a method that I can call at any time. Currently, the tearDown method gets called because my expected activity name doesn't match.
Also, even if Activity A exists, there are other predefined overlay texts too. So, if I use solo.waitForText("abc") to check for the text "abc", I may see overlay 2 with the text "pqr" instead.
So I was looking for a way to automate this, and I can't use the solo.assertCurrentActivity() or solo.waitForActivity() methods, as they just stop the execution after the first failure.
So any guidance is appreciated!
All the waitFor methods return a boolean. So you can use waitForActivity() exactly as you want to. If the Activity doesn't exist it will return false.
You can check which Activity is current:
Activity current = solo.getCurrentActivity();
I'm new to GWT. I'm creating an MVP-based project (as described here) that uses a number of custom events. There are several widgets (10+) that listen for some global events and perform some action (including writing to the DOM) in the event handlers.
What I'm finding is that the UI blocks and doesn't update until each and every one of the handlers for that one event finishes processing. This is causing the UI to perform slowly on page load and on any other event that causes the widgets to update.
I created a similar project in plain JavaScript/jQuery and this was not an issue with that project. In fact, the UI was blazing fast. What am I doing wrong here? The documentation states that GWT is very performant, so I have to conclude that I'm just doing it wrong.
One example: I have a drop-down that selects a date preset (like Yesterday or Last Week). When this happens, I set the selected preset in the model like so:
public void setDateRange(DatePreset dateRange) {
    this.dateRange = dateRange;
    eventBus.fireEvent(new DateChangedEvent(dateRange));
}
Each of the widgets has access to the same event bus and registers a handler for DateChanged events. Each of the widgets needs to do a fair amount of logic and processing (including making an AJAX call) to then update itself with the correct data.
@Override
public void onDateChanged(DateChangedEvent event) {
    DatePreset dateRange = event.getDate();
    // … additional processing and logic
    // … ajax call
}
I've determined after some basic profiling that each widget requires about 100-150 ms to finish processing, which means there's a UI delay of one to two seconds or more. When I say blocking, I mean the drop-down where I selected the date preset doesn't even close (and I see the spinning wheel) until everything finishes.
What can I do to make this run faster (and without UI blocking)? I am open to any ideas. Thanks!
Measuring the speed of the project in development mode can be the reason for this extreme slowness.
You can check the real speed of the application if you deploy it to an app server, or if you delete the &gwt.codesvr=127.0.0.1:9997 part from the end of the URL in dev mode.
I am building a GWT application, and Speed Tracer says that the painting process takes a long time. Reading the PDF of the Google I/O 2010 session ("Architecting for performance with GWT"), this statement appears:
When should I use widgets?
When a component must receive events AND
There's no way to catch events in the parent widget
I agree with the first condition (I want to use widgets because my components, such as text boxes or images, must receive events such as MouseOver and MouseClick), but my question concerns the second condition. I do not understand in which case there would be no way to catch events in the parent widget, since it is ("always") possible to access any element/component by manipulating the DOM with JavaScript. Here I am assuming that with JavaScript I can access the widget elements (identified with ui:field, for example, in a ui:binder) and the DOM elements (identified with id="").
So could you tell me why I am wrong, or give me an example of a case where "there's no way to catch events in the parent widget"?
Thank you.
It's more about "no easy way to put code that would catch events in the parent widget". It's all about componentization: you don't want to put event handling code outside your component, and you don't want to make your event handling code attach to elements outside your component. So components are still widgets, but inside them, try to use HTML and event bubbling as much as possible.
In practice, that means using HTMLPanel (or RenderablePanel for better performance, if you use 2.5.0 RC1 and you're a bit adventurous) inside composites, and otherwise using CellWidget (with UiRenderer to make it easy to handle events bubbling from specific child elements).