I'd like to know which signal is emitted when a Gtk.Window is fully shown; by "fully shown" I mean that the window itself is shown and so are its widgets.
I tried several signals:
show
realize
visibility-notify-event
set_focus
but none of them works properly.
The only interesting answer I found on the web is this.
Connect a callback after the GtkWidget::draw signal (previously called expose-event in GTK+ 2).
Addendum
There is other stuff that comes into play: double buffering, client-side windows and (why not?) the fact that a widget can defer its drawing to an idle callback.
If you want to know when your main window appears for the first time, it is far easier (and saner) to add a g_idle_add() callback after your show_all() call.
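A minimal PyGObject sketch of that approach (assuming GTK 3; the callback name on_first_idle is just for illustration):

    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk, GLib

    def on_first_idle(window):
        # Runs once the main loop has processed the show/draw work queued
        # by show_all(), so the window and its children should be on screen.
        print("visible:", window.get_property("visible"))
        return False  # one-shot: remove this idle handler

    win = Gtk.Window(title="demo")
    win.add(Gtk.Label(label="hello"))
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    GLib.idle_add(on_first_idle, win)
    Gtk.main()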
It should be:

    window.get_property("visible")
    # Returns True if the window is visible
I'm trying to create a modal window (wxFrame) in Perl with the wxPerl library (and wxWidgets 3.0.2). The reason is that it is important for me that the code pauses after the window is shown, until the user closes it. I found https://stackoverflow.com/a/2573660/5746693.
I would like to use this code with wxPerl, but I have a problem using the Wx::EventLoop class. It seems there is a problem with loading this library; I couldn't even find this class in the wxWidgets documentation. Sorry for the probably stupid question.
Or is there some possibility to implement a custom modal frame (with my own controls in it) based on the wxDialog class?
Thanks for any reply.
If you need a modal window, use a wxDialog, not a wxFrame. Using the latter just doesn't make sense: the main difference between the two is that a dialog can be (and usually is) modal, while a frame can't. Otherwise they are almost exactly the same.
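For the last part of the question: a custom modal dialog with your own controls is straightforward. Derive from the dialog class and call ShowModal(), which blocks until the dialog is closed. A rough sketch, shown in wxPython purely for illustration (class and control names are made up; the wxPerl equivalent derives from Wx::Dialog and uses the same method names):

    import wx

    class MyModalDialog(wx.Dialog):
        """Hypothetical custom modal dialog with its own controls."""
        def __init__(self, parent):
            super().__init__(parent, title="Please confirm")
            sizer = wx.BoxSizer(wx.VERTICAL)
            sizer.Add(wx.StaticText(self, label="Put any controls you like here"),
                      0, wx.ALL, 10)
            # Standard OK/Cancel buttons end the modal loop automatically.
            sizer.Add(self.CreateButtonSizer(wx.OK | wx.CANCEL),
                      0, wx.ALL | wx.EXPAND, 10)
            self.SetSizerAndFit(sizer)

    app = wx.App()
    frame = wx.Frame(None, title="main window")
    frame.Show()

    dlg = MyModalDialog(frame)
    result = dlg.ShowModal()   # execution stops here until the user closes the dialog
    print("OK" if result == wx.ID_OK else "cancelled")
    dlg.Destroy()

    app.MainLoop()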
I'm still pretty new to scripting in Unity3D, and I'm following along with a tutorial that uses GUI.Button() to draw a button on the screen.
I am intrigued by how this function works. Looking through the documentation, the proper use of GUI.Button is to invoke the function in an if statement and put the code to be called when the button is pushed within the if statement's block.
What I want to know is, how does Unity3D "magically" delay the code in the if statement until after the button is clicked? If it was being passed in as a callback function or something, then I could understand what was going on. Perhaps Unity is using continuations under the hood to delay the execution of the code, but then I feel like it would cause code after the if statement to be executed multiple times. I just like to understand how my code is working, and this particular function continues to remain "magical" to me.
I don't know if it's the right term, but I usually refer to such a system as an immediate mode GUI.
how does Unity3D "magically" delay the code in the if statement until after the button is clicked?
GUI.Button simply returns true if a click event happened inside the button's bounds during the last frame. Basically, by calling that function you are polling: every frame, for every button, you ask the engine whether an event involving that button's screen area has happened.
If it was being passed in as a callback function or something, then I could understand what was going on
You are probably used to an MVC-like pattern, where you pass a controller delegate that's called when a UI event is raised by the view. This is something really different.
Perhaps Unity is using continuations under the hood to delay the execution of the code, but then I feel like it would cause code after the if statement to be executed multiple times.
No. The function simply returns immediately, and it returns true only if an event happened. If it returns false, the code inside the if block won't be executed at all.
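If it helps, here is a rough sketch of that polling idea in plain Python (all names are made up; this is not Unity's implementation, just the general immediate-mode pattern):

    def draw_rect(rect, label):
        pass  # stand-in for the real rendering call

    def hit(rect, x, y):
        rx, ry, rw, rh = rect
        return rx <= x <= rx + rw and ry <= y <= ry + rh

    def button(event, rect, label):
        # Draw on repaint events; on a click inside the rectangle,
        # return True for exactly this frame.
        if event["type"] == "repaint":
            draw_rect(rect, label)
            return False
        if event["type"] == "mouse_up":
            return hit(rect, event["x"], event["y"])
        return False

    def on_gui(event):
        # Re-run for every event, much like OnGUI.
        if button(event, (10, 10, 100, 30), "Quit"):
            print("Quit was clicked this frame")

    # One simulated frame: a repaint pass plus a click inside the button.
    for ev in ({"type": "repaint"}, {"type": "mouse_up", "x": 50, "y": 25}):
        on_gui(ev)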
Side notes:
That kind of system is hard to maintain, especially for a complex, structured GUI.
It has really serious performance implications (memory allocations, one draw call per UI element).
Unless you are writing an editor extension or custom inspector code, I'd stay away from it. If you want to build a menu, implement your own system or use an external plugin (there are several good ones: NGUI, EZGUI, ...).
Unity has already announced a new integrated UI system; it should be released soon.
Good question. The Unity3D GUI goes through several event phases; as the documentation puts it:
Events correspond to user input (key presses, mouse actions), or are UnityGUI layout or rendering events.
For each event, OnGUI is called in the scripts, so OnGUI is potentially called multiple times per frame. Event.current corresponds to the "current" event inside the OnGUI call.
In OnGUI you can find out which event is currently happening with Event.current.
The following events (the types of UnityGUI input and processing events) are processed:
- MouseDown: A mouse button was pressed.
- MouseUp: A mouse button was released.
- MouseMove: The mouse was moved (editor views only).
- MouseDrag: The mouse was dragged.
- KeyDown: A keyboard key was pressed.
- KeyUp: A keyboard key was released.
- ScrollWheel: The scroll wheel was moved.
- Repaint: A repaint event; one is sent every frame.
- Layout: A layout event.
- DragUpdated: Editor only: drag & drop operation updated.
- DragPerform: Editor only: drag & drop operation performed.
- DragExited: Editor only: drag & drop operation exited.
- Ignore: The event should be ignored.
- Used: An already processed event.
- ValidateCommand: Validates a special command (e.g. copy & paste).
- ExecuteCommand: Executes a special command (e.g. copy & paste).
- ContextClick: The user has right-clicked (or control-clicked on the Mac).
Unity GUI has improved much lately and is quite useful if you want to handle things programmatically. If you want to handle things visually, I recommend looking at the plugins heisenbug refers to.
If you decide to use Unity GUI, I recommend using only one object with OnGUI, and letting this object handle all your GUI.
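As a rough illustration of that single-manager idea (plain Python with made-up names, not Unity's API): one object receives every event and dispatches it by type, drawing on repaints and hit-testing on clicks:

    class GuiManager:
        def __init__(self):
            # One central place that knows about every control.
            self.buttons = {"quit": (10, 10, 100, 30)}

        def on_gui(self, event):
            # Called once per event; a frame may deliver Layout, Repaint,
            # mouse and keyboard events one after another.
            if event["type"] == "Repaint":
                self.draw_all()
            elif event["type"] == "MouseUp":
                self.handle_click(event["x"], event["y"])

        def draw_all(self):
            for name, rect in self.buttons.items():
                pass  # render each control here

        def handle_click(self, x, y):
            for name, (rx, ry, rw, rh) in self.buttons.items():
                if rx <= x <= rx + rw and ry <= y <= ry + rh:
                    print(name, "clicked")

    gui = GuiManager()
    for ev in ({"type": "Layout"}, {"type": "Repaint"},
               {"type": "MouseUp", "x": 40, "y": 20}):
        gui.on_gui(ev)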
I understand from documentation and several related StackOverflow posts that window.parent, if there is no other parent, will self-reference and thus never be undefined.
I can't seem to find a decent reason as to why this is. JavaScript does have its idiosyncrasies, but this one just seems odd.
MSDN simply states that
If the current window doesn’t have a parent, i.e. it occupies the whole browser window, Parent returns the current window’s Window object.
MDN states
If a window does not have a parent, its parent property is a reference to itself.
And the W3 standard itself
The value of the parent attribute of a Window object MUST be the parent document's Window object or the document's Window object if there is no parent document
I've not seen other languages acting like this, so what reason is there for this self-referencing design? Wouldn't 'null' or 'undefined' make for a more obvious situation when you hit the topmost element in a window?
So, why?
When working with iframes, developers often automate processes which navigate through windows. While the algorithms at their core will consist of the same basic logic, the conceptual approaches will differ.
Instead of working in a parent-children manner, sometimes the developer will craft the system in such a way that it seems not to look for the parent, but simply for the right window to use: the one that controls (not necessarily holds) the area where the code is currently running.
With such approaches, it would be conceptually weird for the program to return false or undefined when asked for a reference to the "right" window, because there must be one.
For instance, Bob is programming:
Bob: I embedded an iframe! Alright, let me just play around with the window that contains my entire iframe (not the window of the iframe itself)
Bob: What? Null? But I don't get it, my iframe is up & running, how can there not be any window which controls it?
I'm just saying that window.parent may not be meant to literally and strictly get the parent from the DOM (like .parentElement does), but rather to point to the window which wraps not only your script, but also everything else that wraps it at lower levels.
In the case of the topmost window (where your script is being executed), that property may return the same window because, not having any other window more important than it, it simply becomes 'the right one' to use when looking for the superior container.
I hope I make some sense.
I would say that this helps with window communication. Third-party content might leverage window.parent.postMessage as its way of communicating with its implementing context, but it might be embedded with no parent window. An HTML page loading content in an iframe would have its own window as the iframe window's parent, but content loaded into something like a browser plugin, such as an Electron webview, would have no parent window, so the postMessage would fail and the implementing context would not be able to listen for that event. So basically it just acts as a safety net that allows devs to always use window.parent, because they might not know whether their code will be running as window.top or not.
I assume this is just unfortunate naming. That property could have been better named something like 'parentOrCurrentWindow'.
If what you want is 'parent or current window', then being able to access that as just 'parent' makes your code a little shorter. And if you know that is so, then it does not matter much. You could say it is better to get hold of SOME window than null.
But note this has nothing to do with JavaScript the language; it is about the DOM model implemented by browsers. The DOM model could be improved to include two properties, 'parentOrCurrent' and 'parentOrNull'. And in fact you could define such variables in your own code to make it clear which one you are talking about.
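To illustrate why the self-reference can be convenient, here is a rough analogy in plain Python (invented names, nothing to do with the actual DOM): a node whose parent falls back to itself lets callers walk upward without null checks:

    class Frame:
        def __init__(self, name, parent=None):
            self.name = name
            self._parent = parent

        @property
        def parent(self):
            # Like window.parent: the topmost frame is its own parent.
            return self._parent if self._parent is not None else self

        @property
        def top(self):
            node = self
            while node.parent is not node:
                node = node.parent
            return node

    top = Frame("top")
    child = Frame("iframe", parent=top)
    print(child.parent.name)  # "top"
    print(top.parent.name)    # "top" -- self-reference instead of None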
I'm developing an application that periodically draws images on a GTK Drawing Area inside a window.
The rendering first works well and the window content gets repainted if I drag another window over the drawing one, but after some random amount of time (some seconds), the window stops updating itself.
New images don't get displayed, and if I then drag another window over the rendering one I get this:
When I click one of the checkboxes below my drawing area, the window gets refreshed and the problem is gone for another few seconds.
Any idea what could make the GTK threads stop updating the window content?
I don't know which part of my code is of interest for answering that question, so I pasted the mostly-full version here.
My GTK-main() is called like this:
    void window_main()
    {
        /* Start the worker thread that periodically draws the images. */
        pthread_create(&drawing_thread, NULL, img_draw, NULL);

        /* Run the GTK main loop; release the GDK lock once it returns. */
        gtk_main();
        gdk_threads_leave();
    }
Thanks for any hints! :)
Found the solution: in the original example code I used (here), they use g_timeout_add() to register their periodic drawing function.
The g_timeout_add()-registered function is run by gtk_main(), which means it is protected internally by gdk_threads_enter() and gdk_threads_leave(). That's the point I was not aware of.
I surrounded my call to gtk_widget_queue_draw_area() with these two functions and the bug is gone. 8)
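For reference, a minimal PyGObject sketch of the same situation (assuming GTK 3; names like img_draw_loop are just for illustration). Instead of taking the GDK lock around the call as described above, it lets the worker thread hand the redraw request to the main loop via GLib.idle_add, which avoids touching GTK from the wrong thread altogether:

    import threading, time
    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk, GLib

    def img_draw_loop(area):
        # Worker thread: never calls GTK directly, it only schedules work
        # on the main loop, which then performs the redraw.
        while True:
            time.sleep(0.1)
            GLib.idle_add(area.queue_draw)

    win = Gtk.Window(title="drawing")
    area = Gtk.DrawingArea()
    win.add(area)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()

    threading.Thread(target=img_draw_loop, args=(area,), daemon=True).start()
    Gtk.main()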
In our project, we're using gtkmm and we have several classes that extend Gtk::Window in order to display our graphical interface.
I have now found out which call produces the behaviour (described in the previous revision; the question has now changed slightly).
We're displaying one window, works like a charm.
Then, we have a window which displays various status messages. Let's call it MessageWindow. It has a method setMessage(Glib::ustring msg) which simply calls a label's set_text().
After some processing, we hide this window again and we now show a toolbar. Just yet another simple window, nothing crazy.
The following applies to all windows: the main thread calls show() on the window and creates a new thread which calls Gtk::Main::run() (without arguments).
That's how it should be, until now.
The problem starts here: the main thread now wants to call MessageWindow::setMessage("any string"). (a) If I call this method, the message window reacts completely correctly, but afterwards the toolbar window is displayed empty. (b) If I don't call it, the message window doesn't change the label (which is absolutely clear), and the toolbar window is displayed as it should be.
Seems like the windows are messing up each other.
Now the question:
If my GUI thread is blocking in Gtk::Main::run(), how can I change the text of a label?
We're using gtkmm-2.4 (and no, we cannot upgrade)
Any help is appreciated.
Wow! That's complicated...
First: you should not manipulate windows from several threads. That is, you should have just one GUI thread that does all the GUI work, and let the other threads communicate with it.
It is theoretically possible to make it work (on Linux; on Windows it is impossible), but it is more trouble than it is worth.
Second: the line Gtk::Main main(argc, argv) is not a call, it is an object declaration. The object main should live for the duration of the program, so if you declare it in an object constructor, it will be destroyed as soon as you return from that constructor! Just put it at the top of the main function and forget about it.
UPDATE: My usual approach here is to create a pipe, attach a g_io_channel that reads from one end, and write bytes to the other end from the worker thread.
Another option, although I haven't tested it, is to get the GMainContext of the main thread, create an idle source with g_idle_source_new(), and attach that source to the main context with g_source_attach(). If you try this one and it works, please post your result here!
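A rough sketch of the pipe approach in PyGObject (assuming GTK 3; the gtkmm version would use the corresponding Glib classes, and all names here are illustrative). The worker thread only writes bytes to the pipe; the watch callback runs in the GUI thread and is therefore free to update the label:

    import os, threading, time
    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk, GLib

    read_fd, write_fd = os.pipe()

    win = Gtk.Window(title="status")
    label = Gtk.Label(label="waiting...")
    win.add(label)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()

    def on_pipe_readable(source, condition):
        # Runs in the GUI thread, so touching the label is safe here.
        label.set_text(os.read(read_fd, 4096).decode())
        return True  # keep watching the pipe

    GLib.io_add_watch(read_fd, GLib.PRIORITY_DEFAULT, GLib.IO_IN, on_pipe_readable)

    def worker():
        for i in range(5):
            time.sleep(1)
            os.write(write_fd, ("step %d" % i).encode())  # never touches GTK

    threading.Thread(target=worker, daemon=True).start()
    Gtk.main()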