In GTK what is the difference between "signals" and "events"?

I am trying to get started with GTK, but I find the documentation for signals (https://developer.gnome.org/gobject/stable/signal.html) hard to understand.
It seems as though there is a difference between a "signal" and an "event".
For example, the documentation for the "event"-signal for a Widget (https://developer.gnome.org/gtk3/stable/GtkWidget.html#GtkWidget-event) says
The GTK+ main loop will emit three signals for each GDK event delivered to a widget: one generic ::event signal, another, more specific, signal that matches the type of event delivered (e.g. “key-press-event”) and finally a generic “event-after” signal.
So it seems to me, that GDK uses "events", whereas GTK+ uses "signals". Maybe events are just packed into signals, or the other way around? Or are they completely different things?
My understanding of the above quote:
When a key is pressed, a GDK event is fired. This GDK event calls a callback function of the widget (which is not for the programmer to interfere with). That callback function then in turn emits the three signals ::event, key-press-event and event-after, one after the other. As a programmer I can intercept these signals by writing callback functions. If the callback for the first ::event signal returns TRUE, then the second key-press-event signal is not fired, otherwise it is. The third event-after signal is always fired.
Is my understanding correct?
Furthermore, in the docs, signals are sometimes prefixed with a double colon (::event) and sometimes they are not (key-press-event and event-after). What is the difference? What is the meaning of the double colon?

It's just nomenclature.
Signals, in GObject, are just a fancy way of calling named lists of functions; each time an instance "emits" a signal, the GSignal machinery looks at all the callbacks connected to that particular signal and calls them sequentially, until one of these conditions is satisfied:
the list of callbacks is exhausted
the signal accumulator specified when the signal was defined stops the emission chain because its condition is met
All signals emitted by GDK or GTK+ (as well as by any other GObject-based library) work exactly that way.
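As a minimal sketch of that machinery, here is a custom GObject signal written with the PyGObject bindings (the C API, g_signal_new() plus g_signal_connect(), behaves the same way); the Thing class and its signal name are made up for illustration:

import gi
from gi.repository import GObject

class Thing(GObject.GObject):
    # A custom signal: no return value, one string argument.
    __gsignals__ = {
        "something-happened": (GObject.SignalFlags.RUN_FIRST, None, (str,)),
    }

def first_handler(obj, message):
    print("first handler:", message)

def second_handler(obj, message):
    print("second handler:", message)

thing = Thing()
thing.connect("something-happened", first_handler)
thing.connect("something-happened", second_handler)
# Emitting the signal invokes the connected callbacks one after the other.
thing.emit("something-happened", "hello")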
Events, in GDK, are structures related to windowing system events, like a button press, a key release, the pointer crossing a window boundary, a change in the window hierarchy, and so on. The only interaction you generally have with GDK events happens in specific signals on the GtkWidget types. As a convention (though it does not always apply), the signals that carry a GdkEvent structure have an -event suffix, like button-press-event, key-release-event, enter-notify-event, or window-state-event. Again, those are GObject signals, and their only specialization is having a GdkEvent as an argument.
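To make that concrete, here is a rough sketch of connecting to the generic ::event signal and to key-press-event, again using the PyGObject bindings for brevity (in C the handlers take a GdkEvent * and return a gboolean). Returning True from the ::event handler stops the more specific signal from being emitted, while event-after runs regardless:

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, Gdk

def on_any_event(widget, event):
    # Returning True claims the event: "key-press-event" below will then
    # not be emitted for key presses, but "event-after" still is.
    return event.type == Gdk.EventType.KEY_PRESS

def on_key_press(widget, event):
    print("key pressed:", Gdk.keyval_name(event.keyval))
    return False  # False lets the emission continue

def on_event_after(widget, event):
    print("event-after:", event.type)  # return value is ignored here

window = Gtk.Window(title="event demo")
window.connect("event", on_any_event)
window.connect("key-press-event", on_key_press)
window.connect("event-after", on_event_after)
window.connect("destroy", Gtk.main_quit)
window.show_all()
Gtk.main()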
As for the double colon: the full specification of a signal is made up of the type that declares it, e.g. GtkWidget, and the signal name, e.g. button-press-event, separated by a double colon: GtkWidget::button-press-event. The ::button-press-event notation is just a documentation shorthand, signifying that the writer is referring to the button-press-event signal.

The simple way to understand it is that events are something done to an object, say a GtkButton (we choose a button as something you can see). When you click the button, the button receives an event from you (actually from GDK, a thin layer between GTK and the underlying windowing and graphics system). Upon receiving an event it has to do something, otherwise it's a dead object.
From there, something has to be done, and a signal picks up the rest. A signal is emitted "from" the object to tell other objects that something has happened. In short, a signal is what catches an event and lets you act on it.
The most commonly used predefined signal for GtkButton is "clicked". Within the callback for that signal, you can do anything you want.
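A hypothetical minimal sketch of that, in PyGObject syntax (the button label and callback name are made up):

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

def on_clicked(button):
    # Whatever you want to happen when the button is activated goes here.
    print("button was clicked")

button = Gtk.Button(label="Click me")
button.connect("clicked", on_clicked)  # hook the callback up to the signal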
Now, another question: hey, why don't we just catch the event from the mouse button ourselves and handle it from there? Of course you can. Here's what that would involve:
1. get the position of the button in the window
2. calculate the allocated width, height and position and keep them in memory, so that when the user produces a button-press event inside that area, it triggers something
3. write another function so that when you resize, minimize or maximize the window, you recalculate the position, width and height and keep them in memory, and do the same for every other widget around it, because their sizes change too
4. if you choose not to show the widget, recalculate every widget in the window, because their positions, widths and heights are now totally different, and store that in memory
5. if you move the window or the window is hidden, do nothing, because the coordinates where the button used to be are now covered by something else; you don't want a click on the screen (where the button was) to make your application do something while another window is focused
6. if you lose your mouse? ................ damn
Next, GDK uses signals too. For example, GdkScreen emits three signals, all of which react to events: when compositing is turned on or off, when you hook up another screen, and when you change the screen resolution.
Next, callbacks are not what emit signals; a signal emission "emits" (invokes) callbacks. It is up to you whether to connect to a signal (intercept it, in your terms) or not. The signal itself is not your function; it is predefined, and you just attach your own function to it. After you are done with a signal, you can also disconnect from it.
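For example (PyGObject syntax; in C this is g_signal_connect() and g_signal_handler_disconnect()), connecting returns a handler id that you can later use to disconnect, reusing the button and on_clicked callback from the sketch above:

handler_id = button.connect("clicked", on_clicked)
# ... later, when you no longer want this callback to run:
button.disconnect(handler_id)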
Next, yes: if the callback for the widget's "event" signal returns TRUE, the second, more specific signal is not emitted. Note: do not tamper with the event mask of a widget, since a widget has its own default event mask.
Finally, the double colon? Either the documentation writer just likes double colons, or it simply marks the signal as belonging to a class. Don't worry about it; you are probably not going to use it in C.

Drawable presented late, causes steady state delay

I have a little Swift playground that uses a Metal compute kernel to draw into a texture each time the mouse moves. The compute kernel runs very fast, but for some reason, as I start dragging the mouse, some unknown delays build up in the system and eventually the result of each mouse move event is displayed as much as 4 frames after the event is received.
All my code is here: https://github.com/jtbandes/metalbrot-playground
I copied this code into a sample app and added some os_signposts around the mouse event handler so I could analyze it in Instruments. What I see is that the first mouse drag event completes its compute work quickly, but the "surface queued" event doesn't happen until more than a frame later. Then once the surface is queued, it doesn't actually get displayed at the next vsync, but the one after that.
The second mouse drag event's surface gets queued immediately after the compute finishes, but it's now stuck waiting for another vsync because the previous frame was late. After a few frames, the delay builds and later frames have to wait a long time for a drawable to be available before they can do any work. In the steady state, I see about 4 frames of delay between the event handler and when the drawable is finally presented.
What causes these initial delays and can I do something to reduce them?
Is there an easy way to prevent the delays from compounding, for example by telling the system to automatically drop frames?
I still don't know where the initial delay came from, but I found a solution to prevent the delays from compounding.
It turns out I was making an incorrect assumption about mouse events. Mouse events can be delivered more frequently than the screen updates — in my testing, often there is less than 8ms between mouse drag events and sometimes even less than 3ms, while the screen updates at ~16.67ms intervals. So the idea of rendering the scene on each mouse drag event is fundamentally flawed.
A simple way to work around this is to keep track of queued draws and simply not begin drawing again if another drawable is still queued. For example, something like:
var queuedDraws = 0

// Mouse event handler:
if queuedDraws > 1 {
    return // skip this frame
}
queuedDraws += 1

// While drawing:
drawable.addPresentedHandler { _ in
    queuedDraws -= 1
}

gtkmm3 drawings outside on_draw

I am working on a real-time plot application where a stream of data is to be plotted on screen. Earlier, using gtkmm2, I did this with a custom widget (derived from Gtk::Bin) that has a member function which creates a cairo context and does the plotting.
Now, with gtkmm3, I am unable to plot from any method other than on_draw. Here is what my custom draw method body looks like:
Gtk::Allocation oAllocation = get_allocation();
Glib::RefPtr<Gdk::Window> refWindow = get_window();
Cairo::RefPtr<Cairo::Context> refContext = refWindow->create_cairo_context();

refWindow->begin_paint_rect(oAllocation); // added later
refContext->save();
refContext->reset_clip();
refContext->set_source_rgba(1, 1, 1, 1);
refContext->move_to(oAllocation.get_x(), oAllocation.get_y());
refContext->line_to(oAllocation.get_x() + oAllocation.get_width(),
                    oAllocation.get_y() + oAllocation.get_height());
refContext->stroke();
refContext->restore();
refWindow->end_paint();
Initially I derived the class from Gtk::DrawingArea, then tried Gtk::Bin while adding the begin_paint_rect call.
Is it forbidden to draw in any place other than on_draw?
For something like a plot (or anything that is rather complex to draw) I advise using a buffer. I lost a month of my life because I read that gtkmm3 does its own buffering, so that "double buffering" isn't needed anymore (as opposed to gtkmm2), but it ain't that simple (read: that isn't true).
So, what you should do is draw to your own surface, and every time you change something, call queue_draw_region or queue_draw_area.
Then, in on_draw, get the list of clip rectangles and copy those regions from your private surface to the cr that is passed to on_draw. Cairo normally does the exact same thing (or so they claim), copying what you just copied again to the screen, so you should turn that off (this should be possible, I read).
The reason you can't rely on Cairo's buffering is that it doesn't KEEP that buffer; what you get back is some corrupted surface, so you are forced to redraw EVERYTHING inside the clip rectangle list. That wouldn't be too bad if you (your application) were the only one making changes (as per your queue_draw_* calls): then you could set a flag, invalidate the part(s) that need redrawing and simply postpone the drawing until you get to on_draw. But sometimes on_draw is called for other reasons, for example when you open a menu that goes over your drawing area. I think this is a bug (or a design error), but it is the way it is. The result is that you can't know what you have to redraw EXCEPT by looking at the clip rectangle list, which makes it incredibly hard to redraw just a part of your area unless your drawing is made up of many separate rectangles (like, say, a chess board). The only feasible way is to keep a full copy of the image in memory (your private surface) and just copy the clip rectangle list from there when in on_draw.
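To make the approach concrete, here is a rough sketch of the pattern. The question uses gtkmm/C++, but this is written with the PyGObject bindings for brevity; the structure is the same in gtkmm, and the PlotArea class and add_line() method are made-up names:

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk
import cairo

class PlotArea(Gtk.DrawingArea):
    def __init__(self):
        super().__init__()
        self._surface = None  # the private backing surface
        self.connect("configure-event", self._on_configure)
        self.connect("draw", self._on_draw)

    def _on_configure(self, widget, event):
        # (Re)create the backing surface whenever the widget is resized.
        alloc = self.get_allocation()
        self._surface = cairo.ImageSurface(cairo.FORMAT_ARGB32,
                                           alloc.width, alloc.height)
        return False

    def add_line(self, x0, y0, x1, y1):
        # Called from application code when new data arrives: draw onto the
        # private surface, then invalidate only the affected rectangle.
        if self._surface is None:
            return
        cr = cairo.Context(self._surface)
        cr.set_source_rgb(1, 1, 1)
        cr.move_to(x0, y0)
        cr.line_to(x1, y1)
        cr.stroke()
        self.queue_draw_area(int(min(x0, x1)), int(min(y0, y1)),
                             int(abs(x1 - x0)) + 1, int(abs(y1 - y0)) + 1)

    def _on_draw(self, widget, cr):
        # The draw handler only blits the already-rendered surface; the clip
        # set up by GTK restricts the copy to the invalidated region.
        if self._surface is not None:
            cr.set_source_surface(self._surface, 0, 0)
            cr.paint()
        return False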
Is it forbidden to draw in any place other than on_draw?
Basically: Yes.
The idea is that you call gtk_widget_queue_draw() or gtk_widget_queue_draw_area() when you want to cause a redraw.
https://developer.gnome.org/gtk3/stable/GtkWidget.html#gtk-widget-queue-draw
https://developer.gnome.org/gtk3/stable/GtkWidget.html#gtk-widget-queue-draw-area

pyglet: synchronise event with frame drawing

The default method of animation is to have an independent timer set to execute at the same frequency as the frame rate. This is not what I'm looking for because it provides no guarantee the event is actually executed at the right time. The lack of synchronisation with the actual frame drawing leads to occasional animation jitters. The obvious solution is to have a function run once for every frame, but I can't find a way to do that. on_draw() only runs when I press a key on the keyboard. Can I get on_draw() to run once per frame, synchronised with the frame drawing?
The way to do this is to use pyglet.clock.schedule(update), making sure vsync is enabled (which it is by default). This ensures on_draw() is run once per frame. The update function passed to pyglet.clock.schedule() doesn't even need to do anything; it can just be blank. Since on_draw() is now executed once per frame draw, there's no point in having two separate functions that are both executed once per frame. It would have been a lot nicer if there were just an option somewhere in the Window class saying that on_draw() should be run once per frame, but at least it's working now.
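A minimal sketch of that arrangement (the window and label here are just placeholders):

import pyglet

window = pyglet.window.Window(vsync=True)  # vsync is on by default anyway
label = pyglet.text.Label("frame-synced", x=20, y=20)

@window.event
def on_draw():
    # With something scheduled on the clock, this now runs once per frame.
    window.clear()
    label.draw()

def update(dt):
    pass  # intentionally empty; scheduling it just forces a redraw each frame

pyglet.clock.schedule(update)
pyglet.app.run()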
I'm still getting bad tearing, but that's a bug in pyglet on my platform or system. There wasn't supposed to be any tearing.

Setting up a power meter in cocos2d

I am a straight noob. Everyone else says it, but I'm dead serious.
My question is: what is the best way to make a power meter for launching an object? Meaning, how do I set it up so that the longer the player holds, the more power they get? Also, how would I incorporate physics?
What I'd like to accomplish is to have a player holding onto something, so that when he taps and holds on the screen he powers up, and when he lets go he throws the object a certain distance.
Just checking whether there is a touch sequence or not is rather easy: you just have to override two functions of your scene class, one to inform you whenever a touch sequence begins and one to tell you when the touch has ended (a source code example is described in this link). After that, I think you need a gauge to show how much power has been gathered so far. The easiest way is to use a texture with the full power drawn in it, set it as the sprite's texture, and then reveal it little by little as the power goes up, just as in the code below:
// Create the gauge sprite showing zero power (a zero-width slice of the texture).
CCSprite *s = [CCSprite spriteWithTexture:[[CCTextureCache sharedTextureCache] addImage:@"gauge.png"]
                                     rect:CGRectMake(0, 0, 0, 10)];
// Then, whenever the power changes, reveal more of the texture:
[s setTextureRect:CGRectMake(0, 0, power, 10)];
Note that in my code I am using a 100x10 texture (power is something between 0 and 100, and the texture height is 10, hence the last parameter in both CGRectMake calls).

How can I chain animations in iPhone-OS?

I want to do some sophisticated animations. But I only know how to animate a block of changes. Is there a way to chain animations, so that I could for example make changes A -> B -> C -> D -> E -> F (and so on), while the core-animation waits until each animation has finished and then proceeds to the next one?
In particular, I want to do this: I have a UIImageView that shows a cube.
Phase 1) Cube is flat on the floor
Phase 2) Cube rotates slightly to left, while the rotation origin is in bottom left.
Phase 3) Cube rotates slightly to right, while the rotation origin is in the bottom right.
These phases repeat 10 times and stop at Phase 1). Also, the wiggling lessens over time.
I know how to animate ONE change in a block, but how could I do such a repeating thing with some sophisticated code in between? It's not the same thing over time; it changes, since the wiggling becomes less and less until it stops.
Assuming you're using UIView animation...
You can provide an animation 'stopped' method that gets called when an animation is actually finished (check out setAnimationDelegate). So you can have an array of animations queued up and ready to go. Set the delegate, then kick off the first animation. When it ends, the selector you registered with setAnimationDidStopSelector is called (make sure to check the finished flag to confirm it is really done and not just paused). You can also pass along an animation context, which could be the index into your queue of animations, or a data structure containing counters so you can adjust the animation values on each cycle.
So then it's just a matter of waiting for one to be done then kicking off the next one taken off the queue.
The same method applies if you're using other types of animation. You just need to set up your data structures so it knows what to do after each one ends.
You'll need to set up an animation delegate inside your animation blocks. See setAnimationDelegate in the UIView documentation for all the details.
In essence, your delegate can be notified whenever an animation ends. Use the context parameter to determine which step in your animation is currently ending (Phase 1, Phase 2, etc.).
This strategy should allow you to chain together as many animation blocks as you want.
I think the answer to your question is that you need to specify more precisely what you want to do. You clearly have an idea in mind, but pinning down the details will help you answer your own question.