Is it possible in VSCode to peek the declaration (or first mention) of a certain variable in unsupported languages?
I work a lot with MATLAB, and in long scripts it would be helpful if I could have a little peek window (as in e.g. C or Python) to see the first definition of a variable. Of course the variable could have been reassigned later in the script (and without proper symbol support there is probably no way to detect this), but let's assume it hasn't.
To illustrate what I mean, here is a Python example:
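(A rough stand-in for the screenshot; the file and variable names are made up.)

# analysis.py - a made-up stand-in for the screenshot
threshold = 0.75          # first definition, near the top of the file

# ... imagine a few hundred lines here ...

def keep_good(scores):
    # With the cursor on `threshold` below, Peek Definition (Alt+F12 in
    # VS Code by default) shows the assignment above in a small inline
    # window - that is the behaviour I would like to have for MATLAB too.
    return [s for s in scores if s > threshold]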
As the title says, I am learning Common Lisp right now using Portacle, following Practical Common Lisp by Peter Seibel.
I find it quite annoying that the slime-repl-sbcl buffer keeps the text pinned to the bottom of the screen (using C-l or C-v doesn't help, since as soon as I try to execute an expression it scrolls back to the end of the buffer).
Is there any way to improve this? (Should I just write in a source file and compile it? The only similar question I found was about the CIDER REPL, and I couldn't understand it since I am still new to Lisp.)
Thank you for your time
I would like this fixed too. No solution yet. In slime-repl.el, I found:
scroll-conservatively (variable):
A value of zero means always recenter point if it moves off screen.
My test wasn't conclusive.
slime-display-output-buffer (function), which calls slime-repl-show-maximum-output, whose role is to
Put the end of the buffer at the bottom of the window.
I rewrote slime-display-output-buffer without this call, but that wasn't conclusive either.
Maybe I tested badly.
(I'm making this answer a wiki)
You would indeed typically write in a source file and compile each expression separately. Use the REPL only to test functions or do simple computations. To compile functions (or really, any toplevel expression), use C-c C-c - bound to slime-compile-defun by default - when the point (i.e. your cursor) is inside the function's code. The REPL will then "know" about it, so you can test it there; but since it is now written in a file, you can also modify it without having to copy/paste anything! Just make sure to recompile the functions you modify.
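For example (a minimal sketch, using the make-cd function from Practical Common Lisp; the file name is made up), with this in a file db.lisp, you put the point inside the defun, hit C-c C-c, and then call it from the REPL:

;; db.lisp
(defun make-cd (title artist rating ripped)
  (list :title title :artist artist :rating rating :ripped ripped))

;; With the point inside the defun, C-c C-c compiles it; then in the REPL:
;; CL-USER> (make-cd "Roses" "Kathy Mattea" 7 t)
;; (:TITLE "Roses" :ARTIST "Kathy Mattea" :RATING 7 :RIPPED T)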
If you want to compile/load entire files at once, look at the other compilation commands, e.g. slime-compile-and-load-file (see the SLIME manual, in particular its Compilation section).
For your problem: there is an Emacs variable named comint-scroll-to-bottom-on-input (or something along those lines, I can't remember exactly...) which enables the behaviour you are seeing, so that you don't have to scroll back down to enter new expressions. It is possible that SLIME has another variable configuring this behaviour for its REPL; in that case, it would probably be named almost the same, and you can set it to nil to disable it.
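A sketch of where I would start (untested against SLIME itself; the customize group name is my best guess):

;; In your init file:
(setq comint-scroll-to-bottom-on-input nil)   ; affects comint-based REPLs

;; SLIME's REPL is not comint-based, so also browse its own options, e.g.
;; M-x customize-group RET slime-repl RET
;; and look for a scroll-related setting you can turn off.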
Finally, don't hesitate to look at the other tools provided by SLIME! For example, it comes with an "inspector" (see the relevant section of the manual) that you can use instead of evaluating expressions such as *db* in the REPL. In that simple case it makes no real difference, but once you start having, say, hash-tables or different structures/classes, it becomes an incredible tool for interactive development: you can examine the internals of almost everything, redefine things directly from within the inspector without needing complex accessors, and so on.
Simple question here; I just can't seem to phrase it in a way Google understands.
Say I wanted to execute a line of actual programming code (C++ or Java or Python... etc.), like SetCursorPos or printf, from the command prompt's command line. I vaguely imagine I would have to invoke the compiler and pass the command to it as a parameter, from where it would then be converted into machine code and passed to... where exactly?
Okay so that was kind of two questions.
How to run actual code from the command line and
what exactly is happening when a fully compiled program, or converted line of code (presuming these are essentially binary containers at that point), is executed?
Question one takes priority, obviously. Unfortunately, I cannot find any documentation on it, just a bunch of stuff vaguely related to it.
How to run actual code from the command line
Without delving into the vast amounts of blurriness between them, there are two major categories of language implementations: interpreters and compilers.
With many interpreters (or implementations with implicit compilation, such as V8 JavaScript's JIT compiler, or pretty much anything with a REPL), running a single line from the command line should be fairly trivial. CPython (the standard implementation of Python) has the -c command-line option:
$ python -c 'print("Hello, world!")'
Hello, world!
Language implementations with explicit compilation steps tend to be decidedly less simple. In particular, the compiler would need to accept source either directly from the argument list or from standard input (via piping or redirection). On the output side, your compiler would have to support either immediately executing the compiled program or writing it to standard output, so that an operating system feature (if one exists) can execute it from a pipe.
To my knowledge, most explicit compilers are not designed with such usage in mind. In such cases, your best bet is to see if there is a REPL available for the language in question, preferably one as compatible with your compiler as possible, or to create (or find) a wrapper that makes it look like your language has a REPL. The wrapper would:
Accept input along the lines of CPython above.
Create a temporary source file behind the scenes with the code to be run and any necessary boilerplate.
Pass that file to the compiler.
Automatically run the resulting executable.
Delete the source file and executable. These may be cleaned up by the operating system later instead, if they're in a temp directory.
From the point of view of the user, this should look pretty similar to the CPython example, as they wouldn't have to interact with or see the compiler or temporary files.
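For the compiled case, a minimal sketch of such a wrapper (assuming gcc is on your PATH; the script itself is hypothetical) might look like this:

#!/usr/bin/env python3
# runc.py - hypothetical wrapper: "run one line of C from the command line"
import os, subprocess, sys, tempfile

code = sys.argv[1]                       # e.g. 'printf("Hello, world!\n");'

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "snippet.c")
    exe = os.path.join(tmp, "snippet")
    with open(src, "w") as f:            # add the boilerplate around the snippet
        f.write("#include <stdio.h>\nint main(void) { " + code + " return 0; }\n")
    subprocess.run(["gcc", src, "-o", exe], check=True)   # compile
    subprocess.run([exe], check=True)                      # run
# the source file and executable are deleted along with the temporary directory

Usage would then mirror the CPython example:

$ python3 runc.py 'printf("Hello, world!\n");'
Hello, world!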
I'm learning about GCC's cleanup attribute and how it calls a function to be run when a variable goes out of scope, but I don't understand why you can write "cleanup" with or without underscores. Where is the underscored version documented?
The GCC documentation shows it like this:
__attribute__ ((cleanup(cleanup_function)))
However, most code samples I read show it like this:
__attribute__ ((__cleanup__(cleanup_function)))
Ex:
http://echorand.me/site/notes/articles/c_cleanup/cleanup_attribute_c.html
http://www.nongnu.org/avr-libc/user-manual/atomic_8h_source.html
Note that the first example link states they are identical, and of course coding it proves this, but how did he know this originally? Where did this come from?
Why the difference? Where is __cleanup__ defined or documented, as opposed to cleanup?
My fundamental problem lies in the fact that I don't know what I don't know, therefore I am trying to expose some of my unknown unknowns so they become known unknowns, until I can study them and make them known knowns.
My thinking is that perhaps there is some globally-applied principle to gcc preprocessor directives, where you can arbitrarily add underscores before or after any of them? -- Or perhaps only some of them? -- Or perhaps it modifies the preprocessor directive or attribute somehow and there are cases where one method, with or without the extra underscores, is preferred over the other?
You are allowed to define a macro cleanup, as it is not a name that is reserved to the compiler. You are not allowed to define one named __cleanup__. This guarantees that your code using __cleanup__ is unaffected by other code (provided that other code behaves, of course).
As https://gcc.gnu.org/onlinedocs/gcc/Attribute-Syntax.html#Attribute-Syntax explains:
You may optionally specify attribute names with __ preceding and following the name. This allows you to use them in header files without being concerned about a possible macro of the same name. For example, you may use the attribute name __noreturn__ instead of noreturn.
(But note that attributes are not preprocessor directives.)
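For completeness, a small self-contained example of the attribute in use (a sketch of my own, not taken from either link):

#include <stdio.h>
#include <stdlib.h>

/* Called automatically with a pointer to the annotated variable
   when that variable goes out of scope. */
static void free_buffer(char **p)
{
    free(*p);
    puts("buffer freed");
}

int main(void)
{
    /* __cleanup__ and cleanup name the same attribute; the underscored
       spelling just cannot collide with a user-defined macro. */
    char *buf __attribute__((__cleanup__(free_buffer))) = malloc(64);
    snprintf(buf, 64, "hello");
    printf("%s\n", buf);
    return 0;   /* free_buffer(&buf) runs here, as buf leaves scope */
}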
I am defining a variable at the beginning of my source code in MATLAB. Now I would like to know at which lines this variable affects something. In other words, I would like to see all lines in which that variable is read. This includes not only all accesses in the current function, but also possible accesses in sub-functions that receive this variable as an input argument. That way, I could quickly see where a change to this variable has any influence.
Is there any possibility to do so in MATLAB? A graphical marking of the corresponding lines would be nice but a command line output might be even more practical.
You can always use "Find Files" to search for a certain keyword or expression. In my R2012a/Windows version it is in Edit > Find Files..., with the keyboard shortcut [CTRL] + [SHIFT] + [F].
The result will be a list of lines where the searched string is found, in all the files found in the specified folder. Please check out the options in the search dialog for more details and flexibility.
Later edit: thanks to @zinjaai, I noticed that @tc88 also wants this tool to track the effect of the variable inside functions/sub-functions. I think this is:
1. Very difficult to achieve. The problem of running through all the possible values and branching on every possible conditional expression is... well, it is hard. I think it is halting-problem-hard.
2. In 90% of cases the assumption that the output of a function is influenced by its input is true, and the input and the output are part of the same statement (assigning the result of a function), so looking for where the variable is used as an argument should suffice to identify which output variables are affected.
3. There are perverse cases where functions will alter arguments that are handle-typed (because the argument is not copied, but referenced); see the sketch after this list. This side effect breaks assumption 2, and is one of the main reasons for point 1. Outlining the cases when these side effects take place is, again, hard, and it is better to assume that all of them are modified.
4. Some other cases are inherently undecidable, because they depend not on the program's state, but on the state of the "outside world". Example: suppose one calls uigetfile. The function returns a char when the user selects a file, and a double when the user chooses not to select a file. Obviously the two cases will be treated differently. How could you know which variables are created/modified before the user decides?
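A minimal sketch of the handle-argument side effect from point 3 (the class and function names are made up):

% Counter.m  (handle classes are passed by reference)
classdef Counter < handle
    properties
        n = 0
    end
end

% bump.m  (modifies its argument even though it returns nothing)
function bump(c)
c.n = c.n + 1;
end

% at the command line or in a script:
c = Counter;
bump(c);
disp(c.n)   % prints 1: the "input" variable c was silently modified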
In conclusion: I think that human intuition, plus the MATLAB Debugger (at run time), Find Files (to quickly search where a variable is used), and depfun (to quickly identify function dependencies), is way cheaper. But I would like to be wrong. :-)
I can read the documentation, so I'm not asking for a cut-and-paste of that.
I'm trying to understand the motivation for this function.
When would I want to use it?
The documentation in the Emacs Lisp manual does have some example situations that seem to answer your question (as opposed to the doc string).
From looking at the Emacs source code, eval-and-compile is used to quiet the compiler, to make macros/functions available during compilation (and evaluation), or to make feature/version specific variants of macros/functions available during compilation.
One usage I found helpful to see was in ezimage.el. In there, an if statement was put inside the eval-and-compile to conditionally define macros depending on whether the package was compiled/eval'ed in Emacs or XEmacs, and additionally whether a particular feature was present. By wrapping that conditional inside the eval-and-compile you enable the appropriate macro usage during compilation. A similar situation can be found in mwheel.el.
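A condensed sketch of that pattern (my own, not the actual ezimage.el code; the macro name is made up):

(eval-and-compile
  (if (featurep 'xemacs)
      (defmacro my-defimage (sym spec)
        ;; hypothetical XEmacs-specific definition
        `(defvar ,sym ,spec))
    (defmacro my-defimage (sym spec)
      ;; hypothetical GNU Emacs definition
      `(defvar ,sym (find-image ,spec)))))

;; Because the `if` runs at compile time as well as at load time, code
;; compiled later in the same file can already expand my-defimage.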
Similarly, if you want to define a function via fset and have it available during compilation, you need to have the call to fset wrapped with eval-and-compile because otherwise the symbol -> function association isn't available until the file is evaluated (because compilation of a call to fset just optimizes the assignment, it doesn't actually do the assignment). Why would you want this assignment during compilation? To quiet the compiler. Note: this is just my re-wording of what is in the elisp documentation.
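A sketch of the fset case (the function names are made up):

(eval-and-compile
  ;; Create the binding at compile time too, so the byte compiler does not
  ;; warn that `my-frobnicate' is not known to be defined.
  (fset 'my-frobnicate (lambda (x) (* 2 x))))

(defun my-caller (y)
  (my-frobnicate y))   ; byte-compiles without a warning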
I did notice a lot of uses in Emacs code which just wrapped calls to require, which sounds redundant when you read the documentation. I'm at a loss as to how to explain those.