I implemented a small function that parses an SQL INSERT statement and highlights the column value when the cursor is on a column name, and vice versa.
Then I wanted to add the ability to quickly jump between a column name and its value. I used push-mark in my implementation, so I can jump with C-x C-x (exchange-point-and-mark). That works too; the only thing that bothers me is the Elisp documentation, which says:
Novice Emacs Lisp programmers often try to use the mark for the wrong purposes. The mark saves a location for the user's convenience. Most editing commands should not alter the mark.
Is my usage of the mark correct? Or what would be a better solution?
Consider an analogy with the position of point: the user only wants point to move when they issue a point-moving command. It would be exceedingly annoying if random operations like font-locking moved the point. Hence the recommendation to wrap function bodies in (save-excursion ...).
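For instance, a highlighting helper like yours can do all of its searching inside save-excursion; a minimal sketch, with a hypothetical name and a stand-in search:
(defun my-sql-highlight-helper ()
  "Sketch: locate the matching value without disturbing the user's point."
  (save-excursion
    ;; Any movement here (searching, jumping around the statement) is
    ;; undone when save-excursion exits, so point never visibly moves.
    (beginning-of-line)
    (re-search-forward "VALUES" nil t)))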
If your function sets the mark explicitly for the user, that's fine. (In this case I suggest calling your function something like sql-mark-column-value to make it clear that setting the mark is one of the things it does.) The point of the documentation you quoted is that commands should not set the mark incidentally as a result of doing something else.
If your function just happens to set the mark when the user places point on a column name in a SQL statement, that's probably not so convenient. Imagine someone trying to cut or copy a section of a SQL statement; every time they move point within the statement, their mark gets clobbered! For this use case you probably want to provide a separate command like sql-goto-column-value instead of relying on exchange-point-and-mark.
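A minimal sketch of such a command, assuming a hypothetical helper my-sql-column-value-position that computes where the matching value starts:
(defun sql-goto-column-value ()
  "Jump from the column name at point to the matching value."
  (interactive)
  (let ((pos (my-sql-column-value-position)))   ; hypothetical helper
    (when pos
      (push-mark)   ; explicit and user-visible: part of the command's job
      (goto-char pos))))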
Of course, if this is purely for your own use, anything goes.
AFAIK, this is a correct use of push-mark. I think the documentation discourages using push-mark instead of save-excursion merely to save the state of the buffer while performing certain operations.
As the title says, I am learning Common Lisp right now, using Portacle and following Practical Common Lisp by Peter Seibel.
I find it quite annoying that the slime-repl-sbcl buffer keeps the text pinned to the end of the screen (using C-l or C-v doesn't help, since as soon as I execute an expression it scrolls back to the end of the screen).
Is there any way to improve this? (Should I just write in a text file and compile it? The only similar subject I found was about the CIDER REPL, and I couldn't understand it, since I am still new to Lisp.)
Thank you for your time.
I would like this fixed too. No solution yet. In slime-repl.el, I found:
scroll-conservatively (variable):
A value of zero means always recenter point if it moves off screen.
My test wasn't conclusive.
slime-display-output-buffer (function), which calls slime-repl-show-maximum-output, whose role is to
Put the end of the buffer at the bottom of the window.
I rewrote slime-display-output-buffer without this call, but that wasn't conclusive either.
Maybe I tested badly.
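If someone wants to repeat the experiment, a less invasive route than rewriting the function is to neutralize the call with advice. Untested; it assumes slime-repl-show-maximum-output is still what forces the scroll in current SLIME:
;; Stop SLIME from forcing the REPL window to the bottom (experiment).
(advice-add 'slime-repl-show-maximum-output :override #'ignore)
;; Undo with: (advice-remove 'slime-repl-show-maximum-output #'ignore)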
(I'm making this answer a wiki)
You would indeed typically write in a source file and compile each expression separately, using the REPL only to test functions or do simple computations. To compile a function (or really, any toplevel expression), use C-c C-c (bound to slime-compile-defun by default) when the point (= your cursor) is inside the function's code. The REPL will then "know" about it, so you can test it there; but since it is now written in a file, you can also modify it without having to copy/paste anything! Just make sure to recompile the functions that you modify.
If you want to compile/load entire files at once, look at the other compilation commands, e.g. slime-compile-and-load-file (see the SLIME manual, and its Compilation section in particular).
For your problem: there is an Emacs variable named comint-scroll-to-bottom-on-input (or something along those lines, I can't remember exactly...) which enables the behaviour you are seeing, so that you don't have to scroll back to enter new expressions. It is possible that SLIME has another variable configuring this behaviour for its REPL; in that case, it would probably be named almost the same, and you can set it to nil to disable it.
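For a comint-derived REPL the setting would be the line below; whether SLIME's REPL (which is not comint-based) honours an analogous variable is an assumption to verify with C-h v:
;; Applies to comint-derived REPLs; the SLIME REPL may need its own knob.
(setq comint-scroll-to-bottom-on-input nil)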
Finally, don't hesitate to look at the other tools provided by SLIME! For example, it comes with an "inspector" (see the relevant section of the manual), which you can use instead of evaluating expressions such as *db* in the REPL. In this simple case it makes no real difference, but once you start having, say, hash-tables or different structures/classes, it becomes an incredible tool for interactive development: you can examine the internals of almost everything, redefine things directly from within the inspector without needing complex accessors, and so on.
I have helm-find-files bound to C-x C-f since it provides a much more convenient way to open files. Unfortunately for me, if point is currently inside something that looks like a filename, that alters helm-find-files's behaviour, often changing its current directory. (See https://github.com/emacs-helm/helm/issues/1178)
Is there a straightforward way for me to nobble thing-at-point inside helm-find-files-initial-input so that it never believes point is inside a filename? I thought that defadvice would help but I'm having trouble working out how.
Or perhaps there's a better way to make helm-find-files behave consistently no matter where point is without modifying its implementation directly?
You can set helm-find-files-ignore-thing-at-point to t.
Documentation:
Use only ‘default-directory’ as default input in ‘helm-find-files’.
I.e text under cursor in ‘current-buffer’ is ignored.
Note that when non-nil you will be unable to complete filename at point
in ‘current-buffer’.
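So, in your init file:
;; Ignore any filename at point; always start from default-directory.
(setq helm-find-files-ignore-thing-at-point t)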
I am defining a variable at the beginning of my source code in MATLAB. Now I would like to know at which lines this variable affects something. In other words, I would like to see all lines in which that variable is read. This includes not only all accesses in the current function, but also possible accesses in sub-functions that use this variable as an input argument. That way, I can quickly see where a change to this variable takes effect.
Is there any possibility to do so in MATLAB? A graphical marking of the corresponding lines would be nice, but a command-line output might be even more practical.
You can always use "Find Files" to search for a certain keyword or expression. In my R2012a/Windows version it is under Edit > Find Files..., with the keyboard shortcut [CTRL] + [SHIFT] + [F].
The result will be a list of lines where the searched string is found, across all the files in the specified folder. Please check out the options in the search dialog for more details and flexibility.
Later edit: thanks to @zinjaai, I noticed that @tc88 also required that this tool track the effect of the variable inside functions/sub-functions. I think this is:
1. Very difficult to achieve. The problem of running through all the possible values and branching on every possible conditional expression is... well, it is hard. I think it is halting-problem-hard.
2. In 90% of cases the assumption that the output of a function is influenced by its input is true. And since the input and the output are part of the same statement (assigning the result of a function), looking for where the variable is used as an argument should suffice to identify which output variables are affected.
3. There are perverse cases where functions alter arguments that are handle-type (because the argument is not copied, but referenced). This side effect breaks assumption 2, and is one of the main reasons for point 1. Outlining the cases when these side effects take place is, again, hard, and it is better to assume that all arguments are modified.
4. Some other cases are inherently undecidable, because they depend not on the computer's state, but on the state of the "outside world". Example: suppose one calls uigetfile. The function returns a char when the user selects a file, and a double when the user chooses not to select one. Obviously the two cases will be treated differently. How could you know which variables are created/modified before the user decides?
In conclusion: I think that human intuition, plus the MATLAB Debugger (for run time), Find Files (to quickly search where a variable is used), and depfun (to quickly identify function dependencies), is way cheaper. But I would like to be wrong. :-)
I'm looking for a way to generate and insert header comment blocks above my functions in Emacs (in any mode), with the default contents of the comment automatically based on the function's signature (i.e. the correct number of @param place-holders).
Doxymacs is a nice candidate, but I'd prefer a way that works without the extra libraries. Can anyone recommend some other ways to add smart comments for functions in Emacs? Thanks.
Edit:
Now I have found this: http://nschum.de/src/emacs/doc-mode/, but it doesn't seem to work well after I require it in my .emacs and add a hook for js-mode. Doesn't it support JS functions?
I don't know of any general-purpose approach.
Csharp-mode has a defun that is bound to "/", which tries to generate comments appropriate for C#. The way it works: every time you type a slash, it checks whether it is the third slash in a row (in C#, three slashes denote comments that produce documentation). If it is, it looks at the surrounding text and inserts a comment skeleton or fragment that is appropriate.
It is not generalized in any way to support javascript or other language syntaxes. But you might be able to build what you want, if you start with that.
Here's the excerpt:
http://pastebin.com/ATCustgi
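In case the link rots, here is a rough sketch of the mechanism (my own illustration, not the actual csharp-mode code; the inserted skeleton is just an example):
(defun my-electric-doc-slash ()
  "Insert \"/\"; after a third consecutive slash, expand a doc skeleton."
  (interactive)
  (insert "/")
  ;; Did we just complete "///" at this spot?
  (when (looking-back "///" (line-beginning-position))
    (insert " <summary>\n/// \n/// </summary>")
    (forward-line -1)      ; back to the middle "/// " line
    (end-of-line)))        ; leave point ready for typing the summary

;; Example binding: (define-key csharp-mode-map "/" #'my-electric-doc-slash)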
I've used doxymacs in the past and I've found it useful
http://doxymacs.sourceforge.net/
I'm producing a function for imenu-create-index-function, to index a source code module, for csharp-mode.el
It works, but delivers completely unacceptable performance. Any tips for fixing this?
The Background
I looked at js.el, the rebadged "espresso" that has been included in Emacs since v23.2. It indexes JavaScript files very nicely, and does a good job with anonymous functions and the various coding styles and patterns in common use. For example, in JavaScript one can do:
(function() {
  var x = ... ;
  function foo() {
    if (x == 1) ...
  }
})();
...to define a scope where x is "private" or inaccessible from other code. This gets indexed nicely by js.el, using regexps, and it indexes the inner functions (anonymous or not) within that scope also. It works quickly. A big module can be indexed in less than a second.
I tried following a similar approach in csharp-mode, but it's quite a bit more complicated. In JS, everything that gets indexed is a function, so the starting regex is "function" with some elaboration on either end. Once an occurrence of the function keyword is found, there are 4-8 other regexps that get tried via looking-at; the number depends on settings. One nice thing about js mode is that you can turn the regexps for various coding styles on or off, to speed things along I suppose. The default "styles" work for most of the code I tried.
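To make the strategy concrete, here's a toy sketch in the same spirit (my own illustration, much simpler than what js.el actually does):
(defun my-js-index-function-names ()
  "Toy sketch: one cheap anchor regexp, then a few looking-at checks."
  (save-excursion
    (goto-char (point-min))
    (let (names)
      (while (re-search-forward "\\_<function\\_>" nil t)
        (let ((resume (point)))
          (goto-char (match-beginning 0))
          (cond
           ;; named function: function foo(...)
           ((looking-at "function\\s-+\\([A-Za-z_$][A-Za-z0-9_$]*\\)")
            (push (match-string-no-properties 1) names))
           ;; anonymous function: function (...)
           ((looking-at "function\\s-*(")
            (push "<anonymous>" names)))
          (goto-char resume)))
      (nreverse names))))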
This approach doesn't transfer well to csharp-mode. It works, but it performs poorly enough to make it not very usable. I think the reasons for this are:
there is no single marker keyword in C# that behaves the way function does in JavaScript. In C# I need to look for namespace, class, struct, interface, enum, and so on.
there's a great deal of flexibility in how C# constructs can be defined. As one example, a class can specify base classes as well as implemented interfaces. Another example: the return type of a method isn't a simple word-like string, but can be something messy like Dictionary<String, List<String>>. The index routine needs to handle all those cases and capture the matches. This makes it run sloooooowly.
I use a lot of looking-back. The marker I use in the current approach is the open curly brace. Once I find one, I use looking-back to determine whether the curly belongs to a class, interface, enum, method, etc. I've read that looking-back can be slow; I'm not clear on how much slower it is than, say, looking-at.
once I find an open-close pair of curlies, I call narrow-to-region to index what's inside. I'm not sure whether this kills performance; I suspect it is not the main culprit, because the perf problems I see happen in modules with one namespace and 2 or 3 classes, which means narrow gets called only 3 or 4 times in total.
What's the Question?
My question is: do you have any tips for speeding up imenu-like indexing in a C# buffer?
I'm considering:
avoiding looking-back. I don't know exactly how to do this, because when re-search-forward finds, say, the keyword class, the cursor is already in the middle of a class declaration; looking-back seems essential.
instead of using open-curly as the marker, use the keywords like enum, interface, namespace, class
avoid narrow-to-region
Any hard advice? Further suggestions?
Something I've tried, and am not really enthused about revisiting: building a wisent-based parser for C# and relying on semantic to do the indexing. I found semantic to be very, very, very (etc.) difficult to use, hard to discover, and problematic. I had semantic working for a while, but then I upgraded to v23.2, it broke, and I never could get it working again. Simple things, like indexing the namespace keyword, took a very long time to solve. I'm very dissatisfied with it and don't want to try again.
I don't really know C# syntax, and without looking at your elisp it's hard to give an answer, but here goes anyway.
looking-back can be deadly slow; it's the first thing I'd experiment with. One thing that helps a lot is using the limit arg to, say, restrict the search to the beginning of the current line. A different approach: when you hit the open curly, do backward-char then backward-sexp (or whatever) to get to the front of the previous word, then use looking-at.
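Both ideas in elisp, with illustrative regexps (adapt them to your real patterns):
;; Idea 1: cap looking-back at the start of the current line.
(looking-back "\\_<\\(class\\|interface\\|enum\\)\\s-+[A-Za-z_][A-Za-z0-9_]*\\s-*"
              (line-beginning-position))

;; Idea 2: step back from the "{" and use looking-at instead.
(save-excursion
  (backward-char)      ; move off the open curly
  (backward-sexp)      ; over the preceding word; repeat as needed
  (looking-at "\\_<\\(class\\|interface\\|enum\\)\\_>"))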
Using keywords to search around instead of open curly is probably what I would have done. Maybe something like (re-search-forward "\\(enum\\|interface\\|namespace\\|class\\)[ \t\n]*{" nil t) then using match-string-no-properties on the first capture group to see which of the keywords was found. This might help with the looking-back problem as well.
I don't know how expensive narrow-to-region is, but it could be avoided: when you find an open curly, use save-excursion and forward-sexp to find the matching close curly, and keep that position as a limit for the current iteration of your (I assume recursive) searches.
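Putting the last two suggestions together, a sketch of the scanning loop (illustrative only; real C# headers need richer patterns):
(defun my-csharp-scan-region (bound)
  "Sketch: collect (KEYWORD . POS) pairs up to BOUND without narrowing."
  (let (items)
    (while (re-search-forward
            "\\_<\\(enum\\|interface\\|namespace\\|class\\)\\_>" bound t)
      (push (cons (match-string-no-properties 1) (match-beginning 1)) items)
      ;; Bound the body with forward-sexp instead of narrow-to-region.
      (when (re-search-forward "{" bound t)
        (backward-char)                      ; sit on the "{"
        (let ((end (save-excursion (forward-sexp) (point))))
          ;; an inner pass could recurse here with END as its bound:
          ;; (my-csharp-scan-region end)
          (goto-char end))))                 ; skip past the body
    (nreverse items)))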