Folding of Scala imports in Vim - scala

I'm trying to have my Vim setup automatically fold import statements when I open a Scala file, similar to what IntelliJ does when you open a file. I currently have the following lines in my .vimrc, taken from the wiki:
set foldmethod=syntax
set foldenable
syn region foldImports start=/(^\s*^import\) .\+/ end=/^\s*$/ transparent fold keepend
However, when I open a .scala file it doesn't fold the imports, but instead folds the bodies of objects. I am also using the vim-scala plugin. Thanks for the help!

You were pretty close to getting this to work. There are a few funky factors at play that we should consider.
Setting foldmethod to syntax (by the way, this is not covered in Learn Vimscript the Hard Way, so :help foldmethod was key to figuring this out):
SYNTAX fold-syntax
A fold is defined by syntax items that have the "fold" argument.
|:syn-fold|
The fold level is defined by nesting folds. The nesting of folds is
limited with 'foldnestmax'.
Be careful to specify proper syntax syncing. If this is not done
right, folds may differ from the displayed highlighting. This is
especially relevant when using patterns that match more than one line.
In case of doubt, try using brute-force syncing:
:syn sync fromstart
The main thing to note is :syn sync fromstart. This is a useful helper if you have a regex that would match throughout the file and you only want to catch the header. In your case you should be able to ignore it, but it is something to be aware of.
Top-down regex scanning
Since the import block is fairly predictable we can simplify the start and end to look something like this:
syn region foldImports start="import" end=/import.*\n^$/ fold keepend
Since the region just needs some string to start matching on, we can use "import" (or /import/). For the end value we want a slightly more carefully crafted pattern: the key is that the end should be the last import line followed by a blank line (/import.*\n^$/).
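Putting it together, a minimal sketch for your vimrc might look like this (the autocmd wrapper is an assumption on my part; you could instead drop the syn region line into ~/.vim/after/syntax/scala.vim):
" Fold the import block at the top of Scala files
set foldmethod=syntax
set foldenable
" Define the fold region once the scala syntax file has been loaded
autocmd Syntax scala syn region foldImports start="import" end=/import.*\n^$/ fold keepend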
Hopefully that does the trick for you (I do not work with Scala, so you may have to adjust the regex a bit as needed).


Only autocomplete on an exact match in Sublime Text 2

I'm making a custom .tmLanguage file to correctly highlight the syntax I'm using and generally make coding with it easier. I'm almost done, and I got the autocompletion working using a .sublime-completions file.
There's just one minor flaw I'd like to fix. I have a pretty long list of functions, and almost all of them contain an abbreviation of the word 'parameter', PAR. When I start typing that word, the following all appear in the list of completions:
PAR command
DEFPAR command
JDATA command (because the description contains PAR)
SPAA command (because there's a P in the command and an A and an R in the description)
What I want is for only the commands that begin with PAR to show up; from the list above, that would be just the first item.
In other words, I want the completions to be matched against the literal string I'm typing, and only against the trigger part of my completions file (the part before the \t).
That completions file looks roughly like this (the scope name and the entries below are simplified stand-ins for my real ones):
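{
    "scope": "source.mylang",
    "completions":
    [
        { "trigger": "PAR\tPAR command", "contents": "PAR $1" },
        { "trigger": "DEFPAR\tDEFPAR command", "contents": "DEFPAR $1" },
        { "trigger": "JDATA\tJDATA command, reads PAR data", "contents": "JDATA $1" },
        { "trigger": "SPAA\tSPAA command, adjust approach", "contents": "SPAA $1" }
    ]
}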
The trigger part of each entry, before the \t, is what I want my completions list to be based on.
I hope this is understandable. Any help is greatly appreciated.
This is not possible. By design, Sublime's autocomplete feature uses fuzzy matching, so if there are a number of options that all contain the same pattern, but you don't quite remember which one you want, you can type the pattern and have all of the options available. The more you type, the smaller the list of possible options becomes. This is a good thing®; otherwise you'd have to remember the exact command you're looking for, which kind of defeats the purpose of autocomplete and code hinting.

Highlighting lines which are not correctly formatted [Eclipse]

So I have some formatting rules to follow, such as:
A space on each side of operators (*, =, +, %, etc.)
No space at the end of a line
No more than 80 characters per line
Is there a way to highlight in red the lines containing formatting errors?
The Eclipse auto-formatting tool is no good here, because either:
it will change too many lines (old code not written by me),
or it won't (only my code).
This is because I must follow some "colorful" guidelines:
You must fix formatting errors relating to operators in old code, but nothing else.
Your own code must be correctly formatted.
Any ideas?
Thanks
You can select which lines of code you want to format. The Eclipse formatting tool doesn't have to run across the entire file. To do this: select the lines you want to format, then press Ctrl-Shift-F.
You could try using the Eclipse Checkstyle Plugin.
You'll need to configure it with just the rules that you need (the default configuration is very strict, so create a new one with just the rules you care about).
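For example, a configuration covering just the three rules from the question might look roughly like this (treat it as a sketch; module names and placement can differ a bit between Checkstyle versions, e.g. newer releases want LineLength directly under Checker):
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Puppy Crawl//DTD Check Configuration 1.3//EN"
    "http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
<module name="Checker">
  <!-- No trailing whitespace at the end of a line -->
  <module name="RegexpSingleline">
    <property name="format" value="\s+$"/>
    <property name="message" value="Line has trailing whitespace."/>
  </module>
  <module name="TreeWalker">
    <!-- No more than 80 characters per line -->
    <module name="LineLength">
      <property name="max" value="80"/>
    </module>
    <!-- A space on each side of operators -->
    <module name="WhitespaceAround"/>
  </module>
</module>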
This will highlight all lines with formatting issues. I don't think it's possible to ignore old code using the plugin.
Talk to whoever created that coding guideline. It does not make sense in the long run, because editing code in Eclipse will always apply all current formatting rules (which violates that guideline) or none, if you disable the formatter (which leads to you writing bad code).
If there is really no way around that guideline, then you should split your workflow into two phases: reformat all existing code once to fulfill that operator guideline. You may use any tool you like; even a regular expression search and replace might be fine.
After that has been done, configure Eclipse to auto-format only changed lines, but always apply all formattings to each changed line. There is no good reason to not re-format the other 75 characters in an existing line of code, if you already touched 5 characters of it.

How to run an external file from within the scala interactive interpreter (REPL)?

I have a file with several lines of Scala code (imports, list value assignments, etc.) that I often use to initialize some things when using the REPL.
Currently I just open up the file in a text editor and copy-and-paste it into the REPL, but is there a way to do it by calling the external file in a more direct manner so I don't have to copy-and-paste every time?
In some interactive database tools like SQL Plus, it is done by typing #filename at the prompt. Is there something similar in the Scala REPL? Preceding the filename with # doesn't work, eval doesn't work either.
Type:
:help
and you will see that
:load <path> load and interpret a Scala file
solves your problem.
In some circumstances pasting the code might be preferable, though; in that case
:paste
might be your friend. It lets you insert a whole block without line-by-line feedback, until you hit Ctrl + D. In some cases this is significant for how the code is interpreted.
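For example, assuming your setup code lives in a file called init.scala (the file name here is just for illustration):
// init.scala: the imports and values you want preloaded in the REPL
import scala.collection.mutable
val xs = List(1, 2, 3)
Then, inside the REPL:
scala> :load init.scala
After that, mutable and xs are available at the prompt as if you had typed those lines yourself.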

Using regexp to index a file for imenu, performance is unacceptable

I'm producing a function for imenu-create-index-function, to index a source code module, for csharp-mode.el
It works, but delivers completely unacceptable performance. Any tips for fixing this?
The Background
I looked at js.el, the rebadged "espresso" that has been included in Emacs since v23.2. It indexes JavaScript files very nicely and does a good job with anonymous functions and the various coding styles and patterns in common use. For example, in JavaScript one can do:
(function() {
  var x = ... ;
  function foo() {
    if (x == 1) ...
  }
})();
...to define a scope where x is "private" or inaccessible from other code. This gets indexed nicely by js.el, using regexps, and it indexes the inner functions (anonymous or not) within that scope also. It works quickly. A big module can be indexed in less than a second.
I tried following a similar approach in csharp-mode, but it's quite a bit more complicated. In JS, everything that gets indexed is a function, so the starting regex is "function" with some elaboration on either end. Once an occurrence of the function keyword is found, there are 4-8 other regexps that get tried via looking-at (the number depends on settings). One nice thing about js mode is that you can turn the regexps for various coding styles on or off, to speed things along I suppose. The default "styles" work for most of the code I tried.
This approach doesn't carry over well to csharp-mode. It works, but it performs poorly enough to make it not very usable. I think the reasons for this are:
there is no single marker keyword in C# that plays the role function plays in JavaScript. In C# I need to look for namespace, class, struct, interface, enum, and so on.
there's a great deal of flexibility in how C# constructs can be defined. As one example, a class can declare base classes as well as implemented interfaces. Another example: the return type for a method isn't a simple word-like string, but can be something messy like Dictionary<String, List<String>>. The index routine needs to handle all those cases and capture the matches. This makes it run sloooooowly.
I use a lot of looking-back. The marker I use in the current approach is the open curly brace. Once I find one of those, I use looking-back to determine if the curly is a class, interface, enum, method, etc. I read that looking-back can be slow; I'm not clear on how much slower it is than, say, looking-at.
once I find an open-close pair of curlies, I call narrow-to-region in order to index what's inside. I'm not sure whether this kills performance or not. I suspect it is not the main culprit, because the perf problems I see happen in modules with one namespace and 2 or 3 classes, which means narrow gets called only 3 or 4 times total.
What's the Question?
My question is: do you have any tips for speeding up imenu-like indexing in a C# buffer?
I'm considering:
avoiding looking-back. I don't know exactly how to do this because when re-search-forward finds, say, the keyword class, the cursor is already in the middle of a class declaration. looking-back seems essential.
instead of using open-curly as the marker, use the keywords like enum, interface, namespace, class
avoid narrow-to-region
Any hard advice? Further suggestions?
Something I've tried and I'm not really enthused about re-visiting: building a wisent-based parser for C#, and relying on semantic to do the indexing. I found semantic to be very very very (etc) difficult to use, hard to discover, and problematic. I had semantic working for a while, but then upgraded to v23.2, and it broke, and I never could get it working again. Simple things - like indexing the namespace keyword - took a very long time to solve. I'm very dissatisfied with it and don't want to try again.
I don't really know C# syntax, and without looking at your elisp it's hard to give an answer, but here goes anyway.
looking-back can be deadly slow. It's the first thing I'd experiment with. One thing that helps a lot is using the limit arg to, say, restrict your search to the beginning of the current line. A different approach is, when you hit the open curly, to do backward-char then backward-sexp (or whatever) to get to the front of the previous word, then use looking-at.
Using keywords to search around instead of open curly is probably what I would have done. Maybe something like (re-search-forward "\\(enum\\|interface\\|namespace\\|class\\)[ \t\n]*{" nil t) then using match-string-no-properties on the first capture group to see which of the keywords was found. This might help with the looking-back problem as well.
I don't know how expensive narrow-to-region is, but it could be avoided: when you find an open curly, do save-excursion then forward-sexp, and keep that point as a limit for the current iteration of your (I assume recursive) searches.
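Untested, but a rough sketch of the last two suggestions combined might look something like this (the function name and the exact keyword list are placeholders, not anything taken from csharp-mode):
(defun my-csharp-index-region (limit)
  "Rough sketch: collect (NAME . POSITION) pairs for C# constructs up to LIMIT."
  (let ((kw-re (concat "\\_<\\(namespace\\|class\\|struct\\|interface\\|enum\\)\\_>"
                       "[ \t\n]+\\([A-Za-z_][A-Za-z0-9_.]*\\)"))
        (index '()))
    (while (re-search-forward kw-re limit t)
      (let ((kind (match-string-no-properties 1))
            (name (match-string-no-properties 2)))
        (push (cons (format "%s %s" kind name) (match-beginning 0)) index)
        ;; Instead of narrow-to-region: hop over the balanced braces of the
        ;; body with forward-sexp, and use that end position as the limit if
        ;; you recurse into the body.
        (when (search-forward "{" limit t)
          (backward-char)
          (ignore-errors (forward-sexp)))))
    (nreverse index)))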

Why are Perl source filters bad and when is it OK to use them?

It is "common knowledge" that source filters are bad and should not be used in production code.
When answering a similar, but more specific, question I couldn't find any good references that explain clearly why filters are bad and when they can be safely used. I think now is the time to create one.
Why are source filters bad?
When is it OK to use a source filter?
Why source filters are bad:
Nothing but perl can parse Perl. (Source filters are fragile.)
When a source filter breaks pretty much anything can happen. (They can introduce subtle and very hard to find bugs.)
Source filters can break tools that work with source code. (PPI, refactoring, static analysis, etc.)
Source filters are mutually exclusive. (You can't use more than one at a time -- unless you're psychotic).
When they're okay:
You're experimenting.
You're writing throw-away code.
Your name is Damian and you must be allowed to program in Latin.
You're programming in Perl 6.
Only perl can parse Perl (see this example):
@result = (dothis $foo, $bar);
# Which of the following is it equivalent to?
@result = (dothis($foo), $bar);
@result = dothis($foo, $bar);
This kind of ambiguity makes it very hard to write source filters that always succeed and do the right thing. When things go wrong, debugging is awkward.
After crashing and burning a few times, I have developed the superstitious approach of never trying to write another source filter.
I do occasionally use Smart::Comments for debugging, though. When I do, I load the module on the command line:
$ perl -MSmart::Comments test.pl
so as to avoid any chance that it might remain enabled in production code.
See also: Perl Cannot Be Parsed: A Formal Proof
I don't like source filters because you can't tell what code is going to do just by reading it. Additionally, things that look like they aren't executable, such as comments, might magically be executable with the filter. You (or more likely your coworkers) could delete what you think isn't important and break things.
Having said that, if you are implementing your own little language that you want to turn into Perl, source filters might be the right tool. However, just don't call it Perl. :)
It's worth mentioning that Devel::Declare keywords (and starting with Perl 5.11.2, pluggable keywords) aren't source filters, and don't run afoul of the "only perl can parse Perl" problem. This is because they're run by the perl parser itself, they take what they need from the input, and then they return control to the very same parser.
For example, when you declare a method in MooseX::Declare like this:
method frob ($bubble, $bobble does coerce) {
    ... # complicated code
}
The word "method" invokes the method keyword parser, which uses its own grammar to get the method name and parse the method signature (which isn't Perl, but it doesn't need to be -- it just needs to be well-defined). Then it leaves perl to parse the method body as the body of a sub. Anything anywhere in your code that isn't between the word "method" and the end of a method signature doesn't get seen by the method parser at all, so it can't break your code, no matter how tricky you get.
The problem I see is the same problem you encounter with any C/C++ macro more complex than defining a constant: It degrades your ability to understand what the code is doing by looking at it, because you're not looking at the code that actually executes.
In theory, a source filter is no more dangerous than any other module, since you could easily write a module that redefines builtins or other constructs in "unexpected" ways. In practice, however, it is quite hard to write a source filter in a way where you can prove that it's not going to make a mistake. I tried my hand at writing a source filter that implements the Perl 6 feed operators in Perl 5 (Perl6::Feeds on CPAN). You can take a look at the regular expressions to see the acrobatics required simply to figure out the boundaries of expression scope. While the filter works, and provides a test bed to experiment with feeds, I wouldn't consider using it in a production environment without many, many more hours of testing.
Filter::Simple certainly comes in handy by dealing with 'the gory details of parsing quoted constructs', so I would be wary of any source filter that doesn't start there.
In all, it really depends on the filter you are using and how broad a scope it tries to match against. If it is something simple like a C macro, then it's "probably" OK, but if it's something complicated then it's a judgment call. I personally can't wait to play around with Perl 6's macro system. Finally Lisp won't have anything on Perl :-)
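To make the fragility concrete, a trivial Filter::Simple-based filter looks roughly like this (the module name and the DEBUG(...) rewrite are invented for this example):
package My::DebugFilter;
use Filter::Simple;
# Rewrite DEBUG(...) calls in the caller's code into warn statements.
# FILTER_ONLY 'code' asks Filter::Simple to keep quoted strings and POD
# out of the substitution (the "gory details" mentioned above), but the
# blind regex rewrite of everything else is still fragile.
FILTER_ONLY
    code => sub { s/\bDEBUG\(([^)]*)\)/warn "DEBUG: ", $1/g };
1;
A caller just says use My::DebugFilter; and every DEBUG(...) statement after that line is rewritten before perl ever compiles it.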
There is a nice example here that shows what kind of trouble you can get into with source filters:
http://shadow.cat/blog/matt-s-trout/show-us-the-whole-code/
They used a module called Switch, which is based on source filters. And because of that, they were unable to find the source of an error message for days.