I have a .NET library that communicates with our camera, and I am writing a LabVIEW VI to control the camera through that library. From time to time the camera's state in the library is updated depending on how the camera is used (idle, live, none, etc.), and I want LabVIEW to pick up the change.
Looking at the following example, using a callback seems the right way to go, but I am not sure:
how to pass the updated value from the callback VI to the main VI
how to inform main.vi that the callback was invoked by the .NET library
As an example, I want to pass an integer value from the callback to the main VI, but I can't figure it out.
Could you please help me ?
Thanks!
Note that I manually update xValue from main.vi to trigger a callback. What I want is that once the callback is called by .NET, the updated xValue from the callback gets written to the indicator shown in main.vi.
I applied Yair's suggestion, but when I dequeue I am not getting the "invoked" state even though the callback is called.
Drop the register callback node.
Create a user event in the main VI. Make the data (the xValue integer, in your case) a typedef if there is any chance at all you will modify it. Register for the user event in the main VI using a Register for Events node and handle it using an event structure.
Wire the user event into the User Parameter input of the register node.
If you now create the callback VI, you will have the event reference as the user parameter and you can generate the user event inside the callback VI using the Generate User Event primitive.
Now, each time the callback VI runs, it will generate the event and the main VI will have it in its event queue.
A button control can trigger an event on a DataWindow control with the TriggerEvent() function.
The button control in my code was made a child of a DataWindow control with the SetParent Win32 API function. The SetParent external function moves the button from the window onto the DataWindow control, but after SetParent the code that was already written for the Clicked event no longer works. That is why I need to redirect the button's Clicked event to the buttonclicked event of the DataWindow.
There is a good example of redirecting an event using Win32 API calls here: http://bitmatic.com/c/redirecting-mousewheel-events-to-another-control. I need to do the same thing in PowerBuilder.
Can someone look at that code or help me redirect events the way I want?
You're doing things the hard way. Find the name of the DataWindow control (e.g. dw_1), and from the command button just issue dw_1.event buttonclicked ( args ).
Better yet, move the code to a function on the parent object. Controls are navigation objects; they really shouldn't have much code in them (IMHO), but should fire off methods on the parent object.
I have a rather large Matlab program that is GUI based. I am looking into creating automated tests for it, as the current way of checking for bugs before a release is simply using all its functionality like a user would.
I would rather not use a GUI testing program that just records clicks and whatnot, so I was thinking of adding test code that would call the button callbacks directly. The problem I have run into with this is that we have a lot of warndlg and msgbox popups, and I would like my tester code to be able to see these.
Is there any way for Matlab code to tell if a function it called created a warndlg or msgbox? If so, is there any way to click 'ok' on these popups?
In a similar vein, is it possible to handle popups that block code execution (using uiwait or an inputdlg)?
If it matters, I didn't use GUIDE; all the GUI elements are created programmatically.
There are two ways; the first one is more elegant.
Let the functions return an extra output that reports their status, for example 1: success, 2: success with warning, 3: error... (a small sketch of this approach follows after this list).
Create some global variables and make the function change them whenever a warndlg or msgbox shows up. Your test code (or the main window) would then check the status of the global variable.
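For illustration, here is a minimal sketch of the status-code idea; the function name processData and the specific codes are placeholders, not something from the original question:

function status = processData(data)
    % Report what happened instead of only popping up a dialog.
    status = 1;                               % 1: success
    if any(isnan(data(:)))
        warndlg('Data contains NaN values');  % still warn the interactive user
        status = 2;                           % 2: success with warning
    end
end

A test can then call status = processData([1 NaN 3]) and simply assert(status == 2) without having to hunt for the dialog at all.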
You can tell if a warning dialog was created by looking for its tag using the findobj function. A warning dialog created using warndlg will have the tag "Msgbox_Warning Dialog". So code like this would tell you if the warning dialog exists:
set(0,'ShowHiddenHandles', 'on')
h = findobj('Tag', 'Msgbox_Warning Dialog');
warn_exists = ~isempty(h)
set(0,'ShowHiddenHandles', 'off')
To close the warning dialog, you can call delete, like this:
delete(h)
For the message box, I would store the handle when you create the message box, then look at its children to find the buttons, and then look at their callbacks. You should be able to call those callbacks to simulate picking a button; a sketch of the idea follows below.
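As a rough sketch of that approach (the exact callback form varies between MATLAB versions, so treat this as a starting point rather than a finished test helper):

hMsg  = msgbox('Processing finished');          % or use the handle you stored earlier
hBtns = findobj(hMsg, 'Style', 'pushbutton');   % the OK button is a uicontrol child
for hBtn = hBtns(:)'
    cb = get(hBtn, 'Callback');
    if isa(cb, 'function_handle')
        cb(hBtn, []);                           % invoke a function-handle callback directly
    end
end
if ishandle(hMsg)
    delete(hMsg);                               % make sure the dialog is gone afterwards
end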
I'm still pretty new to scripting in Unity3D, and I'm following along with a tutorial that uses GUI.Button() to draw a button on the screen.
I am intrigued by how this function works. Looking through the documentation, the proper use of GUI.Button is to invoke the function in an if statement and put the code to be called when the button is pushed within the if statement's block.
What I want to know is, how does Unity3D "magically" delay the code in the if statement until after the button is clicked? If it was being passed in as a callback function or something, then I could understand what was going on. Perhaps Unity is using continuations under the hood to delay the execution of the code, but then I feel like it would cause code after the if statement to be executed multiple times. I just like to understand how my code is working, and this particular function continues to remain "magical" to me.
I don't know if it's the right term, but I usually refer to such a system as an immediate mode GUI.
how does Unity3D "magically" delay the code in the if statement until after the button is clicked?
GUI.Button simply returns true if a click event happened inside the button bounds during the last frame. By calling that function you are basically polling: every frame, for every button, you ask the engine whether an event concerning that button (screen area) has happened.
If it was being passed in as a callback function or something, then I could understand what was going on
You are probably used to an MVC-like pattern, where you pass a controller delegate that's called when a UI event is raised from the view. This is something really different.
Perhaps Unity is using continuations under the hood to delay the execution of the code, but then I feel like it would cause code after the if statement to be executed multiple times.
No. The function returns immediately, and it returns true only if an event happened. If it returns false, the code inside the if block simply isn't executed at all.
Side notes:
That kind of system is hard to maintain, especially for a complex, structured GUI.
It has really serious performance implications (memory allocations, one draw call per UI element).
Unless you are writing an editor extension or custom inspector code, I'd stay away from it. If you want to build a menu, implement your own system or use an external plugin (there are several good ones: NGUI, EZGUI, ...).
Unity has already announced a new integrated UI system; it should be released soon.
Good question. The Unity3D GUI goes through several event phases, or as the documentation puts it:
Events correspond to user input (key presses, mouse actions), or are UnityGUI layout or rendering events.
For each event, OnGUI is called in the scripts, so OnGUI is potentially called multiple times per frame. Event.current corresponds to the "current" event inside the OnGUI call.
In OnGUI you can find out which event is currently happening with Event.current.
The following events are processed (see the linked documentation):
Types of UnityGUI input and processing events.
- MouseDown: A mouse button was pressed.
- MouseUp: A mouse button was released.
- MouseMove: The mouse was moved (editor views only).
- MouseDrag: The mouse was dragged.
- KeyDown: A keyboard key was pressed.
- KeyUp: A keyboard key was released.
- ScrollWheel: The scroll wheel was moved.
- Repaint: A repaint event. One is sent every frame.
- Layout: A layout event.
- DragUpdated: Editor only: drag & drop operation updated.
- DragPerform: Editor only: drag & drop operation performed.
- DragExited: Editor only: drag & drop operation exited.
- Ignore: The event should be ignored.
- Used: The event has already been processed.
- ValidateCommand: Validates a special command (e.g. copy & paste).
- ExecuteCommand: Executes a special command (e.g. copy & paste).
- ContextClick: The user has right-clicked (or Control-clicked on the Mac).
Unity GUI has improved a lot lately and is quite useful if you want to handle things programmatically. If you want to handle things visually, I recommend looking at the plugins heisenbug refers to.
If you decide to use Unity GUI, I recommend using only one object with OnGUI, and letting that object handle all your GUI.
I have two GUIs which are exact copies of each other.
However, only some of the functionality is used in each GUI. I basically saved a monolithic GUI in GUIDE under two different names.
I am dividing up the monolith into sub-GUIs, each with the same .fig file but saved in GUIDE under different names.
SubguiA and subguiB are launched from two buttons on a parent GUI. In each subgui there is a user control (a panel) which has 'UserData' set to 3005. I run subguiA from button 1. I then run subguiB from button 2, step in, and ask for hpanel = findobj('UserData', 3005) from within the CreateFcn of one of the textboxes on subguiB. I get back hpanel as a 2x1 double because it finds two such panels in memory. I get that.
So when I then go to set the textbox's 'Parent' property to hpanel, the app crashes because hpanel is supposed to be 1x1. I thought I would pass the handle of subguiB to findobj so that it restricts the search specifically to subguiB. However, when the CreateFcn of the textbox on subguiB is being run, it does not yet have the hObject of the entire subguiB; that hObject only becomes available in the OpeningFcn of subguiB, which runs after the CreateFcns of all the user controls on it have executed.
So the question is: how do I restrict findobj to finding the object only in subguiB (which is currently being created)?
thanks
Try another function:
findall(handle_list,'property','value',...)
Here you can pass a handle (or a list of handles) as the parent under which to search for objects with the given properties... You still have to make sure you get the right object; giving it a unique name (for example a unique 'Tag') would probably be helpful! A short example follows below.
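For example, from inside the textbox's CreateFcn you could scope the search to the figure that is currently being built. This is only a sketch; hObject is the CreateFcn's own handle argument, and the UserData value is the one from the question:

% inside the textbox CreateFcn of subguiB
hFig   = ancestor(hObject, 'figure');        % the sub-GUI figure that owns this textbox
hPanel = findall(hFig, 'UserData', 3005);    % search only within that figure
% or, if you give the panel a unique Tag:
% hPanel = findall(hFig, 'Tag', 'subguiB_panel');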
You could try another method of sharing resources so that you do not run into this issue. On the MathWorks File Exchange there is an object-oriented class called Singleton (http://www.mathworks.com/matlabcentral/fileexchange/24911-design-pattern-singleton-creational), which you can subclass to exchange important information and abstract away the GUI interface details.
The point of a singleton is that you are guaranteed there is only one instance in a program, so you can store state information in that object and access it from anywhere. No searching required.
When each GUI runs its CreateFcn, it acquires the instance handle of the singleton subclass you created and sets the GUI A (or GUI B) window handle attribute, so that the other GUI has direct access to it via that same singleton. You can then build a messaging system to exchange or copy values across GUIs, or orchestrate more advanced coordination in your overall app.
This is a great paradigm for any functionality where different parts of your app need to communicate, such as allowing external MATLAB scripts to interact with your GUI for batch-type processing. For example, one GUI's button callback can invoke a method on the singleton object to make the second GUI pop up, and then populate that GUI with the latest data context from the first GUI, without the first GUI knowing anything about the internals of the second GUI. If a GUI's controls change, only the singleton needs to know about those internals. A minimal sketch of the idea follows below.
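Just to make the idea concrete, here is a minimal, simplified stand-in. This is not the File Exchange class itself, only an illustrative handle class with a persistent instance; the class and property names are made up:

classdef GuiRegistry < handle
    properties
        guiA    % figure handle of sub-GUI A
        guiB    % figure handle of sub-GUI B
    end
    methods (Static)
        function obj = instance()
            % Always return the same object, creating it on first use.
            persistent theInstance
            if isempty(theInstance) || ~isvalid(theInstance)
                theInstance = GuiRegistry();
            end
            obj = theInstance;
        end
    end
end

Each GUI can then run reg = GuiRegistry.instance(); reg.guiB = hObject; in its OpeningFcn, and any other GUI or script reaches that same handle through GuiRegistry.instance(), with no findobj searching at all.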
I use the EFL library to develop applications for the Tizen platform. I need to implement an event handler for the hardware "Back" button.
In the native Tizen API this is done quite simply, but I have no idea how to do it with the EFL library.
I tried to do it the following way:
evas_object_event_callback_add( obj, EVAS_CALLBACK_KEY_DOWN, on_key_down, NULL );
But it doesn't work.
Could anyone help me?
Instead of EVAS_CALLBACK_KEY_DOWN and evas_object_event_callback_add(), use ea_object_event_callback_add().
Use EA_CALLBACK_BACK for the back button and EA_CALLBACK_MORE for the menu button.
You need to include one header file; unfortunately I forgot its name, something like efl-util.h or similar, so you may have to search the header files.
AFAIK, the thing is that EFL uses queues for processing events. That means callbacks are called one by one: the first one should return PASS_ON (or something like that) for the next callback registered for the same event to be run.
So there may be another callback that is preventing the event from being dispatched any further.
Try
Ecore_Event_Handler *handler;
handler = ecore_event_handler_add(ECORE_EVENT_KEY_DOWN, hardware_key_down_cb, NULL);
In the hardware_key_down_cb() callback function, check the Ecore_Event_Key key name for 'XF86Stop' to handle the back key event.
Use eext_object_event_callback_add(Evas_Object *obj, Eext_Callback_Type type, callback_func, NULL),
and in the callback function you can write whatever you need.