The Interactive Window of VS Code has a tab called Jupyter:Variables that lets me watch the variables of my code in real time. However, sometimes there are too many variables, or I need to inspect a single one in detail in real time (one that may be too long to display correctly in the Jupyter:Variables tab). Although double-clicking a variable lets me see its contents, that view is not automatically updated when I run another cell, which defeats the purpose of having a watch.
Is there any way (built in or via extensions) to have a watch for a single variable or a selection of variables in the Interactive Window, similar to the debugger's watch (shown in the image)?
As the picture shows, only the variables local to the function are shown, plus the current thread (humanPlayer). I haven't knowingly changed any settings. I've been struggling to debug my code for the last few hours and would love help getting this back to normal. This usually opens up the actual object so I can peek inside and see its variables, which is how I usually go about debugging; it normally shows everything in the current class rather than only the function.
For example, go ahead and open a terminal window and type:
man ls
Have a look around; notice that you can move up and down through the text with the arrow keys, and that the terminal history is now missing.
Use "q" to quit.
The terminal history is now returned.
I'm trying to understand how to write an interface that does the above: the terminal history disappears and reappears on exit, and, most importantly, the interface takes key presses as input instead of using readLine().
On the screen clearing front I've managed an ANSI escape code:
print("\u{001B}[2J")
This clears the screen and puts the cursor at the bottom, but it really just emits newlines and pushes the old content up into the scrollback, unlike the man page view, which removes the scrollable history entirely. So it is currently not what I'm trying to achieve.
As for taking key presses as input, I haven't been able to find much, but Foundation has three references to:
standardInput
The Developer Documentation lists them in FileHandle, NSUserUnixTask & Process. I'm hoping one of them may be able to listen for key presses and respond with a notification that I can use to update my interface with the correct screen clearing and repositioning of text, or to perform an action (like quitting and returning to the normal prompt, as man does when you press "q").
Would love to have some help on this one, thanks!
The standard macOS Cocoa GUI uses events (representing mouse clicks, key presses, etc.) to communicate user input to your app. Read up on event handling and you can make your app quit when the user types "q". That is the underlying mechanism for keyboard input and the alternative to your readLine().
However, what you are describing is more involved than just key-press handling: you want terminal-like screen control (scrolling backwards, clearing the screen, etc.). The original way of doing this on Unix was to use the low-level tty device interfaces and issue control codes that the terminals themselves understood. The modern way is often to implement these same control codes in your Cocoa interface, which allows the output of standard Unix-level commands (e.g. man et al.) to be interpreted. This is what Terminal.app does. The standard Cocoa text controls also implement some of this; in particular, a subset of Emacs-style cursor movement is supported.
The open-source iTerm2 is an Objective-C alternative to Terminal.app; the source is available on GitHub. Reading it will show you how to implement a terminal-window interface in Cocoa in Objective-C; you'll have to work out the Swift equivalent.
HTH
I am making a cross-platform windowing layer. While implementing window relationships, I ran into trouble with window modality.
I have read the official spec (Application Window Properties) and some related topics such as X11 modal dialog. It seems it is not sufficient to set only transient-for; _NET_WM_STATE_MODAL is also required. So I tried to make small programs that apply this property along with transient-for.
I first made a program that creates the window using SDL2 and manipulates X11 directly through the fetched native window handle. But I did not observe any behavior change after the _NET_WM_STATE_MODAL property was set: the transient-for target window still receives mouse button events, unlike a modal-blocked parent window, which cannot be operated by the user.
To rule out potential interference from SDL2, I also made a test program using GDK3, which provides ready-to-use wrapper functions. The behavior is the same as with the SDL2 program.
As I did not observe any change before/after _NET_WM_STATE_MODAL is set, what is the expected behavior of that property?
As I did not observe any change before/after _NET_WM_STATE_MODAL is set, what is the expected behavior of that property?
That's a question we cannot answer. The property is a hint for the window manager indicating modality, but, as in most cases, it is up to the window manager to decide what to do with that hint.
In other words, the behavior depends entirely on the window manager, and you haven't stated which window manager you were testing with.
Furthermore, this hint requires the window manager to be EWMH-compliant, which not all of them are, or are only partially. You can read _NET_SUPPORTED on the root window to see the list of atoms the window manager claims to support. If _NET_WM_STATE_MODAL isn't listed there, chances are the window manager doesn't implement this hint at all. If it is listed, the window manager claims to support it, but (a) it might be lying (let's not assume that, though) and (b) the behavior is still up to the window manager.
I'm using the debugger to pause execution of my program at any time and view the state of the running code, so I set breakpoints before running my executable so that I can stop at known points and view the values of variables in my source code.
After stepping through my code, the debugger lands on a new screen. If I press the "Step over" button it moves to the next line; if I press the "Continue program execution" button it skips the step-by-step execution and resumes running. Image shown below.
My doubt is: why does the debugger end up here after stepping through my code? How do I analyse this assembly-language code, and what is its purpose?
If you pause execution or a breakpoint is triggered, the debug area opens, displaying the values of variables and registers plus the debug console. You can use the buttons at the right end of the debug area toolbar to display both the variables and console panes or to hide either one.
The variables pane displays variables and registers. You specify which items to display using the pop-up menu in the top-left corner of the variables pane:
Auto displays only the variables you're most likely to be interested in, given the current context.
Local displays local variables.
All displays all variables and registers.
Use the search field to filter the items displayed in the variables pane.
The console pane displays program output and lets you enter commands to the debugger tool. You specify the type of output the console displays with the pop-up menu in the top-left corner of the console pane:
All Output displays target and debugger output.
Debugger Output displays debugger output only.
Target Output displays target output only.
Use these to understand what is happening at breakpoints.
Maybe your code threw an exception and execution returned to the [UIViewController loadViewIfRequired] method. That method lives in a compiled binary, so you won't see source code; assembly language is presented instead.
It is possible that [UIViewController loadViewIfRequired] contains exception-handling code.
I have a rather long Perl/Tk program on my hands, and I would like to run the simulation in batch mode (without using the GUI), e.g. invoking it as "myprog.pl -b" instead of setting all the parameters in the GUI and clicking buttons.
My current method uses a separate XML file for the config and the "after" function, which means the GUI pops up, starts the simulation, and then exits after some time. It is working, but I have a question: is there a better way to solve this problem? Is it possible to keep the GUI in the background (so we won't see it) instead of having it pop up?
Change the program so it is accessible from both a graphical and command-line interface. Factor out its real functionality into subroutines.
Run the program under Xvfb (for example with the xvfb-run wrapper) so that no window is shown on the main display.
Configure the window manager to always start instances of this program minimised and/or with a 0x0 size.