I am working on a large (>30k lines) event-driven app. I have a sequence of inputs that produces a bug. What I want to do is to break as soon as the final input enters my code.
Is there a general way to do that?
I understand that for any specific sequence of inputs, I can find out where that last input is going to enter my code, then set a breakpoint there. What I would like to do is take out the step of "find out where that last input enters my code." In other words, I am running the app in the simulator, and I want to set a flag somewhere that says "break the next time you are going to enter non-system Objective-C code." Then I send the event that causes the problem.
I understand what you are asking, but have you tried using an Exception Breakpoint? It basically acts like an auto-inserted breakpoint on the piece of code that throws the exception. If that doesn't work for you, try a symbolic breakpoint.
If you want to intercept UI events, you can try subclassing UIWindow and overriding its sendEvent: method, then setting this class as the class of the UIWindow object in your main XIB file. sendEvent: will be called each time the user generates a touch event. Unfortunately, at this point you cannot yet know which UI object will finally consume the event (read: which event handler code will be ultimately called) since that depends on the actual state of the responder chain. But anyway, you can use this method to inject events into the system.
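For illustration, here is a minimal Swift sketch of that idea (the class name EventTapWindow is invented; in Swift the override is sendEvent(_:)). A breakpoint inside the override stops you on the next incoming event before any of your own handlers run:

    import UIKit

    // Hypothetical event-intercepting window; set it as the class of the app's
    // UIWindow (in the XIB, or wherever the window is created).
    final class EventTapWindow: UIWindow {
        override func sendEvent(_ event: UIEvent) {
            if event.type == .touches {
                // Break or log here: the event has not yet travelled down the
                // responder chain to the application's own handlers.
                print("Intercepted event: \(event)")
            }
            super.sendEvent(event) // always forward it, or the app receives no input
        }
    }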
I'm creating a SwiftUI multiplatform app in Xcode and I have a section of my code that hangs. I want to update the user so they know what's happening and are willing to wait. I originally planned to have an alert whose text changed, and then planned to have a Text element that updated. In both cases nothing is shown until after the code executes, so only the final message appears (normally the success/done message, unless an error made it end sooner).
Is there any way I can show the user a message, through an alert or a SwiftUI element, that is updated right away and is therefore actually helpful?
The fact that the alert isn't even shown until after the code executes is bad and incorrect. This suggests that you are doing something lengthy on the main thread, and that is an absolute no-no. You are freezing the interface and you risk the WatchDog process crashing your app before the user's very eyes. If something takes time, do it in the background.
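As a minimal SwiftUI sketch of that pattern (the steps and delays below are made-up stand-ins for your real work): run the lengthy operation in a detached task and hop back to the main actor for each progress update; the Text refreshes immediately because the main thread is never blocked.

    import SwiftUI

    struct WorkView: View {
        @State private var status = "Idle"

        var body: some View {
            VStack(spacing: 12) {
                Text(status) // redraws as soon as `status` changes
                Button("Start") {
                    status = "Starting…"
                    Task.detached {
                        // Stand-ins for the app's real slow steps.
                        try? await Task.sleep(nanoseconds: 1_000_000_000)
                        await MainActor.run { status = "Step 1 of 2 done…" }
                        try? await Task.sleep(nanoseconds: 1_000_000_000)
                        await MainActor.run { status = "Done" }
                    }
                }
            }
            .padding()
        }
    }

If the real work is synchronous and blocking, it now occupies a background thread instead of the main one, so the alert or label keeps updating while it runs.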
I'm still pretty new to scripting in Unity3D, and I'm following along with a tutorial that uses GUI.Button() to draw a button on the screen.
I am intrigued by how this function works. Looking through the documentation, the proper use of GUI.Button is to invoke the function in an if statement and put the code to be called when the button is pushed within the if statement's block.
What I want to know is, how does Unity3D "magically" delay the code in the if statement until after the button is clicked? If it was being passed in as a callback function or something, then I could understand what was going on. Perhaps Unity is using continuations under the hood to delay the execution of the code, but then I feel like it would cause code after the if statement to be executed multiple times. I just like to understand how my code is working, and this particular function continues to remain "magical" to me.
I don't know if it's the right term, but I usually refer to such a system as an immediate mode GUI.
"How does Unity3D 'magically' delay the code in the if statement until after the button is clicked?"
GUI.Button simply returns true if a click event happened inside the button's bounds during the last frame. By calling that function you are polling: every frame, for every button, you ask the engine whether an event happened in that button's area of the screen.
"If it was being passed in as a callback function or something, then I could understand what was going on."
You are probably used to an MVC-like pattern, where you pass in a controller delegate that's called when a UI event is raised by the view. This is something quite different.
"Perhaps Unity is using continuations under the hood to delay the execution of the code, but then I feel like it would cause code after the if statement to be executed multiple times."
No. The function returns immediately, and it returns true only if an event happened. If it returns false, the code inside the if block simply isn't executed.
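To make the polling idea concrete, here is a tiny conceptual sketch (in Swift, with invented names, so it is emphatically not Unity's implementation or API): the "engine" records this frame's input event, and the button function merely checks that event against its own rectangle and returns the result.

    // Conceptual immediate-mode GUI; all names and types are invented for illustration.
    struct Rect {
        var x, y, width, height: Double
        func contains(_ px: Double, _ py: Double) -> Bool {
            px >= x && px <= x + width && py >= y && py <= y + height
        }
    }

    enum InputEvent {
        case none
        case mouseUp(x: Double, y: Double) // a click that finished this frame
    }

    // Whatever input arrived since the last frame.
    var currentEvent: InputEvent = .mouseUp(x: 20, y: 20)

    // An immediate-mode "button": in a real engine it would also draw itself; here it
    // only reports whether this frame's event was a click inside its bounds.
    func button(_ bounds: Rect, _ label: String) -> Bool {
        if case let .mouseUp(x, y) = currentEvent, bounds.contains(x, y) {
            return true
        }
        return false
    }

    // Called every frame, like OnGUI:
    func onGUI() {
        if button(Rect(x: 10, y: 10, width: 100, height: 30), "Quit") {
            print("Quit clicked") // runs only in the frame where the click landed
        }
    }

    onGUI()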
Side notes:
That kind of system is hard to maintain, especially for a complex, structured GUI.
It has really serious performance implications (memory allocations, one draw call per UI element).
Unless you are writing an editor extension or custom inspector code, I'd stay away from it. If you want to build a menu, implement your own system or use an external plugin (there are several good ones: NGUI, EZGUI, ...).
Unity has already announced a new integrated UI system; it should be released soon.
Good question. The Unity3D GUI goes through several event phases; as the documentation puts it:
Events correspond to user input (key presses, mouse actions), or are UnityGUI layout or rendering events.
For each event, OnGUI is called in the scripts, so OnGUI is potentially called multiple times per frame. Event.current corresponds to the "current" event inside the OnGUI call.
Inside OnGUI you can find out which event is currently being handled by checking Event.current.
The following event types (UnityGUI input and processing events) are handled:
- MouseDown: a mouse button was pressed.
- MouseUp: a mouse button was released.
- MouseMove: the mouse was moved (editor views only).
- MouseDrag: the mouse was dragged.
- KeyDown: a keyboard key was pressed.
- KeyUp: a keyboard key was released.
- ScrollWheel: the scroll wheel was moved.
- Repaint: a repaint event; one is sent every frame.
- Layout: a layout event.
- DragUpdated: editor only, drag & drop operation updated.
- DragPerform: editor only, drag & drop operation performed.
- DragExited: editor only, drag & drop operation exited.
- Ignore: the event should be ignored.
- Used: the event has already been processed.
- ValidateCommand: validates a special command (e.g. copy & paste).
- ExecuteCommand: executes a special command (e.g. copy & paste).
- ContextClick: the user has right-clicked (or Control-clicked on the Mac).
Unity GUI has improved a lot lately and is quite useful if you want to handle things programmatically. If you want to handle things visually, I recommend looking at the plugins heisenbug refers to.
If you decide to use Unity GUI, I recommend using only one object with OnGUI, and letting that object handle all of your GUI.
Sorry if this question has been asked and answered already, or if my title is poorly worded.
I am currently writing an iPhone app, and I have an idea for a useful debugging tool. I would like to write a method that just prints the variables and other info that I want. That part is simple enough, but I want it to be called by a keystroke.
For now I have just been adding NSLog()'s to viewDidLoad or to various button methods to check that my variables are being set properly, but it's becoming tedious, and since the code is long I tend to forget about some of them and spend a lot of time looking for them.
I just want one method where I can put all my NSLog()'s, and have that method called whenever I hit 'space' or something of that sort.
Can this be done?
Thanks!
-SF
It's pretty hard to do: you need a hidden text field that you keep in focus, hide the software keyboard for it, and then listen in its delegate methods for text changes.
An alternative would be to trigger the code inside applicationWillResignActive:, which gets called when the application is hidden, Notification Center is shown, or the home button is double-pressed.
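A minimal Swift sketch of that second approach (dumpDebugState() and whatever it logs are placeholders): every time the app resigns active, e.g. on a double home press, all the debug printing runs from one place instead of NSLog() calls scattered through the code.

    import UIKit

    final class AppDelegate: UIResponder, UIApplicationDelegate {
        var window: UIWindow?

        // Called when the app is hidden, Notification Center is shown,
        // or the home button is double-pressed.
        func applicationWillResignActive(_ application: UIApplication) {
            dumpDebugState()
        }

        // Gather every variable you want to inspect here, in one place.
        private func dumpDebugState() {
            NSLog("=== debug dump ===")
            // NSLog("someFlag = %d", someFlag)  // placeholder: print whatever state you care about
        }
    }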
I'm automating an app that shows overlay messages anywhere in the app for several scenarios, such as the app being installed for the first time, etc. (I'm fairly new to Robotium too.)
The overlay displays text that goes away when you swipe or click on it. There are also different types of these overlays, each with its own unique text. (Let's call this Activity A.)
I wanted to create a robust test case that handles this situation gracefully. From the test's perspective we won't know whether Activity A will be present at any given time, but I want to recover if it is, by writing a method that I can call at any point. Currently, the tearDown method gets called because my expected activity name doesn't match.
Also, even if Activity A exists, there are other predefined overlay texts too. So if I use solo.waitForText("abc") to check for the text "abc", I may get an overlay with the text "pqr" instead.
So I was looking for a way to automate this, and I can't use the solo.assertCurrentActivity() or solo.waitForActivity() methods, as they just stop execution after the first failure.
So any guidance is appreciated!
All the waitFor methods return a boolean. So you can use waitForActivity() exactly as you want to. If the Activity doesn't exist it will return false.
You can check which Activity is current:
Activity current = solo.getCurrentActivity();
In our project, we're using gtkmm and we have several classes that extend Gtk::Window in order to display our graphical interface.
I have now found out which call produces the behaviour (described in the previous revision; the question has now changed slightly).
We're displaying one window; it works like a charm.
Then, we have a window which displays various status messages. Let's call it MessageWindow. It has a method setMessage(Glib::ustring msg) which simply calls a label's set_text().
After some processing, we hide this window again and show a toolbar: just another simple window, nothing crazy.
The same applies to all windows: the main thread calls show() on the window and creates a new thread, which calls Gtk::Main::run() (without arguments).
So far, everything works as it should.
The problem starts here: the main thread now wants to call MessageWindow::setMessage("any string").
a) If I call this method, the message window reacts completely correctly, but afterwards the toolbar window is displayed empty.
b) If I don't call it, the message window doesn't change the label (which is entirely expected), and the toolbar window is displayed as it should.
Seems like the windows are messing up each other.
Now the question:
If my GUI thread is blocking in Gtk::Main::run(), how can I then change the text of a label?
We're using gtkmm-2.4 (and no, we cannot upgrade)
Any help is appreciated.
Wow! That's complicated...
First: you should not manipulate windows from several threads. That is, you should have just one GUI thread that does all the GUI work, and let the other threads communicate with it.
It is theoretically possible to make it work (on Linux; on Windows it is impossible), but it is more trouble than it is worth.
Second: the line Gtk::Main main(argc, argv) is not a call, it is an object declaration. The main object should live for the duration of the program, so if you declare it in an object's constructor, it will be destroyed as soon as you return from the constructor! Just put it at the top of the main function and forget about it.
UPDATE: My usual approach here is to create a pipe, attach a GIOChannel to its read end in the GUI thread, and write bytes to the other end from the worker threads.
Another option, although I haven't tested it, is to get the GMainContext of the main thread, create an idle source with g_idle_source_new(), and attach that source to the main context with g_source_attach(). If you try this one and it works, please post your result here!