I am using Ada together with the Gtk library.
I would like to read the user's keyboard input and react to it individually, depending on which keys were pressed. How can I access the user's keyboard input?
I'm not sure what you're looking for: 1) keystrokes or 2) editable text.
The game LinXtris handles main window key_press_event signals in the procedure On_Main_Window_Key_Pressed, which passes each Gdk.Event.Gdk_Event_Key on to the Game_Engine.
The Interaction demo cited here has a Gtk.Editable that handles Signal_Insert_Text in the procedure On_Insert_Text. The advantage is that the handler is called for single keystrokes, as well as pasted text.
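For the first option, here is a minimal sketch of connecting to key_press_event, assuming GtkAda 3 and its generated On_Key_Press_Event connector; the unit names and the chosen key are illustrative, and the exact callback profile can differ between GtkAda versions.

--  Key_Callbacks: a library-level handler matching GtkAda's key-press callback profile.
with Gtk.Widget; use Gtk.Widget;
with Gdk.Event;  use Gdk.Event;

package Key_Callbacks is
   function On_Key_Pressed
     (Self  : access Gtk_Widget_Record'Class;
      Event : Gdk_Event_Key) return Boolean;
end Key_Callbacks;

with Ada.Text_IO;        use Ada.Text_IO;
with Gdk.Types;          use Gdk.Types;
with Gdk.Types.Keysyms;  use Gdk.Types.Keysyms;

package body Key_Callbacks is
   function On_Key_Pressed
     (Self  : access Gtk_Widget_Record'Class;
      Event : Gdk_Event_Key) return Boolean is
   begin
      if Event.Keyval = GDK_Escape then
         Put_Line ("Escape was pressed");
      else
         Put_Line ("Key value:" & Gdk_Key_Type'Image (Event.Keyval));
      end if;
      return False;  --  returning False lets other handlers see the event too
   end On_Key_Pressed;
end Key_Callbacks;

--  Main procedure: create a window and connect the handler to key_press_event.
with Gtk.Main;
with Gtk.Window;     use Gtk.Window;
with Key_Callbacks;  use Key_Callbacks;

procedure Key_Demo is
   Win : Gtk_Window;
begin
   Gtk.Main.Init;
   Gtk_New (Win);
   Win.On_Key_Press_Event (On_Key_Pressed'Access);
   Win.Show_All;
   Gtk.Main.Main;
end Key_Demo;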
I tried to make my program trigger some actions by tapping "inlined" text instead of using a Button (no particular reason, just improving my skills). Example:
"You can edit or set <blue>default values</blue> for all fields." ("default values" clicked runs appropriate code)
I achieved it by creating an Attributed Text with an embedded "deeplink", registering the app to handle the appropriate URL, and running the appropriate actions from the onOpenURL callback received from the system. It works, but it seems sort of "tricky" and "dirty".
Any idea how to achieve this in a simpler way using SwiftUI (or UIKit)?
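For reference, the approach described above (an attributed string link plus the onOpenURL callback) looks roughly like the sketch below; the "myapp" scheme, the view name and the action are illustrative, and the scheme still needs to be registered in Info.plist (URL Types).

import SwiftUI

// Rough sketch of the deeplink approach: an AttributedString link handled via onOpenURL.
struct InlineActionText: View {
    var body: some View {
        Text(makeText())
            .onOpenURL { url in
                if url.host == "set-defaults" {
                    print("Run the 'default values' action here")
                }
            }
    }

    private func makeText() -> AttributedString {
        var text = AttributedString("You can edit or set default values for all fields.")
        if let range = text.range(of: "default values") {
            // Tapping the link asks the system to open myapp://set-defaults,
            // which comes back to this app through onOpenURL.
            text[range].link = URL(string: "myapp://set-defaults")
        }
        return text
    }
}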
I want to use an editor to display a log from a program; I just need a very basic text field:
With a vertical scrollbar
With a contextual menu for copy/paste
Prevent the user from changing the text
In order to activate the copy/paste menu, I use the class racket:text% from the framework library rather than the basic one.
How to prevent the user from changing the text?
I read the documentation; as far as I understand, the closest thing I found is the lock method:
https://docs.racket-lang.org/gui/editor___.html?q=lock#%28meth._%28%28%28lib._mred%2Fmain..rkt%29._editor~3c~25~3e%29._lock%29%29
But it is not convenient, as it also prevents my program from writing the data.
I also found get-read-write? but could not find set-read-write.
Use the lock method, and just unlock the editor around any modifications that you want to make. You may find it useful to write a call-with-unlock helper function or a with-unlock macro.
If you do your updates from the eventspace's handler thread (and you probably should; use queue-callback if they originate from another thread), then as long as you re-lock the editor at the end of an update, the user will never be able to interact with the unlocked editor.
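A minimal sketch of such a with-unlock helper, assuming the log lives in a text% (or racket:text%) instance; the macro name and the insert call are illustrative:

#lang racket/gui

;; Temporarily unlock an editor, evaluate the body, then lock it again.
;; (dynamic-wind could be used instead of begin0 to re-lock on exceptions.)
(define-syntax-rule (with-unlock editor body ...)
  (let ([e editor])
    (send e lock #f)
    (begin0 (begin body ...)
            (send e lock #t))))

;; Example with a plain text%; the same works for framework's racket:text%.
(define log-text (new text%))
(send log-text lock #t)              ; the user cannot edit the contents

;; Appending log output from the eventspace's handler thread:
(queue-callback
 (lambda ()
   (with-unlock log-text
     (send log-text insert "new log line\n" (send log-text last-position)))))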
I'm trying to do the following: I have a TextField (or any other control) and I want to control focus loss based on validation of the user's input.
I've read this article https://docs.oracle.com/javase/tutorial/uiswing/misc/focus.html#inputVerification but it seems that JavaFX does not handle focus the way Swing does.
What I’m trying to achieve is: “A component's input verifier is consulted whenever the component is about to lose the focus. If the component's value is not acceptable, the input verifier can take appropriate action, such as refusing to yield the focus on the component or replacing the user's input with the last valid value and then allowing the focus to transfer to the next component.”
When a user is focused on a TextField (or any other control), I want to validate the user's input in three scenarios:
1) The Enter key was pressed (I would listen to the KeyEvent, validate the input and, if appropriate, move the focus to the next control, but I don't know how to do the latter).
2) The TAB key was pressed (I need to intercept the focus change event).
3) Focus is lost (for example by clicking on another control or outside the Stage, or even by pressing the TAB key).
I need to validate the user's input and decide whether to allow the focus to be lost or not. In a way, I need to intercept the focus change event.
I can't simply listen to textField.focusedProperty, because that only tells me that I'm losing focus; I can't (or at least I don't know how to) stop it from happening.
I tried to find information about the focus subsystem in JavaFX but couldn't find any.
I'd like to know when the engine handles focus events and act according to:
a) The control that is losing focus (and its content).
b) The possible next control in the focus sequence.
c) Whether the focus remains in the same Stage or is sent to another Stage or application.
I hope I’ve been clear enough with my explanation and please forgive my English if there are any mistakes.
Thank you very much in advance.
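For reference, JavaFX has no direct counterpart to Swing's InputVerifier. A commonly used workaround is to listen to focusedProperty and request the focus back when validation fails, and to move the focus on explicitly when Enter is pressed. A minimal sketch follows, assuming a simple "not blank" rule; the class name, layout and rule are illustrative, and detecting a move to another Stage or application is not covered here.

import javafx.application.Application;
import javafx.application.Platform;
import javafx.scene.Scene;
import javafx.scene.control.TextField;
import javafx.scene.layout.VBox;
import javafx.stage.Stage;

public class FocusValidationSketch extends Application {

    @Override
    public void start(Stage stage) {
        TextField field = new TextField();   // field being validated
        TextField next = new TextField();    // the "next" control in the scene

        // Scenario 1: Enter was pressed. Validate, then move the focus on explicitly.
        field.setOnAction(e -> {
            if (isValid(field.getText())) {
                next.requestFocus();
            }
        });

        // Scenarios 2 and 3: TAB or a mouse click moved the focus away.
        // There is no veto hook, so the workaround is to let the focus go
        // and immediately request it back when the content is invalid.
        field.focusedProperty().addListener((obs, wasFocused, isFocused) -> {
            if (wasFocused && !isFocused && !isValid(field.getText())) {
                Platform.runLater(field::requestFocus);
            }
        });

        stage.setScene(new Scene(new VBox(8, field, next), 300, 100));
        stage.show();
    }

    // Illustrative validation rule: reject blank input.
    private boolean isValid(String text) {
        return text != null && !text.trim().isEmpty();
    }

    public static void main(String[] args) {
        launch(args);
    }
}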
I'm still pretty new to scripting in Unity3D, and I'm following along with a tutorial that uses GUI.Button() to draw a button on the screen.
I am intrigued by how this function works. Looking through the documentation, the proper use of GUI.Button is to invoke the function in an if statement and put the code to be called when the button is pushed within the if statement's block.
What I want to know is, how does Unity3D "magically" delay the code in the if statement until after the button is clicked? If it was being passed in as a callback function or something, then I could understand what was going on. Perhaps Unity is using continuations under the hood to delay the execution of the code, but then I feel like it would cause code after the if statement to be executed multiple times. I just like to understand how my code is working, and this particular function continues to remain "magical" to me.
I don't know if it's the right term, but I usually refer to such a system as an immediate mode GUI.
how does Unity3D "magically" delay the code in the if statement until after the button is clicked?
GUI.Button simply returns true if a click event happened inside the button's bounds during the last frame. By calling that function you are essentially polling: every frame, for every button, you ask the engine whether an event concerning that button (screen area) has happened.
If it was being passed in as a callback function or something, then I could understand what was going on
You are probably used to an MVC-like pattern, where you pass a controller delegate that is called when a UI event is raised by the view. This is something really different.
Perhaps Unity is using continuations under the hood to delay the execution of the code, but then I feel like it would cause code after the if statement to be executed multiple times.
No. The function returns immediately, and it returns true only if an event happened. If it returns false, the code inside the if block simply isn't executed at all.
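A minimal sketch of that polling style (the class name, button label and counter are illustrative):

using UnityEngine;

// Legacy immediate-mode GUI: the button is drawn and polled every OnGUI call.
public class ImmediateGuiSketch : MonoBehaviour
{
    private int clicks;

    void OnGUI()
    {
        // GUI.Button draws the button and returns true only for the OnGUI call
        // in which a click inside its Rect was processed; otherwise it returns false.
        if (GUI.Button(new Rect(10, 10, 150, 30), "Click me"))
        {
            clicks++;  // runs only when the button was actually clicked
        }

        GUI.Label(new Rect(10, 50, 200, 30), "Clicks so far: " + clicks);
    }
}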
Side notes:
That kind of system is hard to maintain, especially for a complex, structured GUI.
It has really serious performance implications (memory allocations, one draw call per UI element).
Unless you are writing an editor extension or custom inspector code, I'd stay away from it. If you want to build a menu, implement your own system or use an external plugin (there are several good ones: NGUI, EZGUI, ...).
Unity has already announced a new integrated UI system; it should be released soon.
Good question. The Unity3D GUI goes through several event phases, or, in the words of the documentation:
Events correspond to user input (key presses, mouse actions), or are UnityGUI layout or rendering events.
For each event, OnGUI is called in the scripts, so OnGUI is potentially called multiple times per frame. Event.current corresponds to the "current" event inside the OnGUI call.
In OnGUI you can find out which event is currently being handled by inspecting Event.current (a short sketch follows the list below).
The following event types (UnityGUI input and processing events) are handled:
- MouseDown: a mouse button was pressed.
- MouseUp: a mouse button was released.
- MouseMove: the mouse was moved (editor views only).
- MouseDrag: the mouse was dragged.
- KeyDown: a keyboard key was pressed.
- KeyUp: a keyboard key was released.
- ScrollWheel: the scroll wheel was moved.
- Repaint: a repaint event; one is sent every frame.
- Layout: a layout event.
- DragUpdated: editor only, drag & drop operation updated.
- DragPerform: editor only, drag & drop operation performed.
- DragExited: editor only, drag & drop operation exited.
- Ignore: the event should be ignored.
- Used: the event has already been processed.
- ValidateCommand: validates a special command (e.g. copy & paste).
- ExecuteCommand: executes a special command (e.g. copy & paste).
- ContextClick: the user has right-clicked (or Control-clicked on a Mac).
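A minimal sketch of inspecting Event.current inside OnGUI (the class name and the logged events are illustrative):

using UnityEngine;

// Logs which event OnGUI is processing during this particular call.
public class EventInspector : MonoBehaviour
{
    void OnGUI()
    {
        Event e = Event.current;

        if (e.type == EventType.KeyDown)
        {
            Debug.Log("Key pressed: " + e.keyCode);
        }
        else if (e.type == EventType.MouseDown)
        {
            Debug.Log("Mouse button " + e.button + " at " + e.mousePosition);
        }
    }
}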
Unity GUI has improved much lately and is quite useful if you want to handle things programmatically. If you want to handle things visually, I recommend looking at the plugins heisenbug refers to.
If you decide to use Unity GUI, I recommend using only one object with OnGUI, and letting that object handle all your GUI.
I am developing a gnome shell extension for Gnome 3.4. My extension needs to capture the window events if any editable text is focused in/out.
global.stage.connect('notify::focus-key', Lang.bind(this, this._myHandler));
did not work for me.
Here is a simple use case: whenever the user clicks on the Firefox search box, I want my handler to be run.
Thanks for any help,
Selcuk pointed me to this question, so I'm answering it here for future searches.
The library that allows you to set a desktop-global listener for focus changes is libatspi (the client-side library of the GNOME accessibility framework). You can use it directly from C, through pyatspi2 (the manual Python bindings), or through gobject-introspection based bindings (e.g. JavaScript). A small JavaScript program that prints name:role_name of the focused object each time the focus changes would be:
const Atspi = imports.gi.Atspi;

// Print name:role_name of the object whose focused state changed
function onChanged(event) {
    log(event.source.get_name() + ',' + event.source.get_role_name());
}

Atspi.init();
let atspiListener = Atspi.EventListener.new(onChanged);
atspiListener.register("object:state-changed:focused");
Atspi.event_main();
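For comparison, roughly the same listener with pyatspi2, the Python bindings mentioned above (a sketch, assuming the standard pyatspi Registry API):

import pyatspi

# Print name and role name of the object that just gained focus.
def on_focus_changed(event):
    # detail1 == 1 means the object gained focus, 0 means it lost it
    if event.detail1 == 1:
        print(event.source.name, event.source.getRoleName())

pyatspi.Registry.registerEventListener(on_focus_changed,
                                       "object:state-changed:focused")
pyatspi.Registry.start()  # blocks, pumping AT-SPI events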
In any case, for code examples, you could take a look at the recently added focus/caret tracking feature in the gnome-shell magnifier (a small example using JavaScript) or at Orca (the GNOME screen reader; a larger example, using pyatspi2).
libatspi reference here: https://developer.gnome.org/libatspi/
gnome-shell magnifier code here: https://git.gnome.org/browse/gnome-shell/tree/js/ui/magnifier.js
You cannot do this.
Application text entry widgets do not fall under the scope of the window manager, so you cannot access their contents, or find out whether or not they have received focus.