I'd like to be able to turn off a particular compiler warning, but only for a single file in my project. Is this possible?
The context is that I have a single source file that makes calls to an external macro library, and those calls produce adapted-args warnings. I found that I can eliminate these warnings by changing my build.sbt file:
scalacOptions ++= Seq("-Xlint:-adapted-args,_" /*, ... */)
However, this turns off the warning globally, and I only want it off for the single file that raises them.
I haven't had any luck searching for any of the following possible solutions I thought might exist:
Specifying separate compilation options for different files in my project in my build.sbt file
Providing some pragma-like comment in my source file to change the warnings generated by the compiler, similar to the special scalastyle:on/off comments recognized by Scalastyle
Some annotation for smaller regions of code, like #unchecked
So, is there any way to have different linting options in effect for different files, or even for limited regions of code?
The expectation was that folks would write a custom Reporter that would filter out undesirable warnings.
It's easy to write one that filters by file name, perhaps given a whitelist.
The error/warn API supplies the textual Position. It would also be easy to parse the position.lineContent for a magic comment token like IGNORE or SUPPRESS, which is not as convenient as a SuppressWarnings annotation but is easy to implement.
The compiler asks the reporter if there were errors.
The custom reporter is specified with -Xreporter myclass, or it wouldn't surprise me if someone has written an sbt plugin.
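For example, here is a rough sketch of such a reporter against the Scala 2.13 reporter API (before 2.13.12, which added code actions to doReport); the whitelisted file name and the IGNORE token are made-up conventions, not anything the compiler defines:
import scala.reflect.internal.util.Position
import scala.tools.nsc.Settings
import scala.tools.nsc.reporters.ConsoleReporter

// Forwards everything to the console except warnings that come from a
// whitelisted file or from a line carrying a magic IGNORE comment.
class SuppressingReporter(settings: Settings) extends ConsoleReporter(settings) {
  private val quietFiles = Set("AdapterCalls.scala") // hypothetical whitelist
  override def doReport(pos: Position, msg: String, severity: Severity): Unit = {
    val fromQuietFile = pos.isDefined && quietFiles.exists(f => pos.source.path.endsWith(f))
    val hasMagicToken = pos.isDefined && pos.lineContent.contains("IGNORE")
    if (!(severity == WARNING && (fromQuietFile || hasMagicToken)))
      super.doReport(pos, msg, severity)
  }
}
The class has to be compiled separately and put on the compiler's classpath before -Xreporter can load it.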
Related
It often comes up during testing and debugging a Scala project built with sbt that I need to pass some extra compiler flags for a particular file, for example -Xlog-implicits to debug implicit resolution problems. However, changing scalacOptions either in build.sbt or in the console invalidates the cache and causes the whole project / test suite to be recompiled. Besides the annoyance of waiting so long, this also means that a lot of noise from irrelevant files is printed. Instead, it would be better if I could compile a specific file with some extra flags from the sbt console, but I have not found a way to do this.
Problem
The reason changing the scalac options triggers recompilation is that Zinc, Scala's incremental compiler, cannot possibly know which compiler flags affect the semantics of incremental compilation, so it is pessimistic about it. I believe this can be improved, and a whitelist of flags could be supported, so that the next person doesn't have to ask about it.
Nevertheless, there's a solution to this problem, and it's more generic than it looks at first sight.
Solution
You can create a subproject in your sbt build which is a copy of the project you want to "log implicits" in, but with -Xlog-implicits enabled by default.
// Let's say foo is your project definition
lazy val foo = project.settings(???)
// You define the copy of your project like this
lazy val fooImplicits = foo
.copy(id = "foo-implicits")
.settings(
target := baseDirectory.value./("another-target"),
scalacOptions += "-Xlog-implicits"
)
Note the following properties of this code snippet:
We redefine the project ID because when we reuse a project definition the ID stays the same as the original one (foo in this case), and sbt fails when two projects share the same ID.
We redefine the target directory because we want to avoid recompilation. If we kept the same target as before, recompiling foo-implicits would delete the compilation products of the previous compilation (and vice versa). That's exactly what we want to avoid.
We add -Xlog-implicits to the scalac options, as you request in this question. In a generic solution, this piece of code would go away.
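For example, once both projects are defined you can compile just the copy, which leaves foo's compilation products untouched:
sbt foo-implicits/compile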
When is this useful?
This is useful not only for your use case, but also when you want to build different modules of the same project against different Scala versions (a sketch follows the list below). This approach has two benefits:
You don't use sbt's ++ cross-building, which is known to have some memory issues.
You can add new library dependencies that only exist for a concrete Scala version.
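A minimal sketch of that setup, assuming sbt 1.x; the module name core, the Scala versions, and the scala-parallel-collections dependency are only illustrative:
// Two projects over the same sources, one per Scala version, with separate
// target directories so their compilation products don't clobber each other.
lazy val core212 = project.in(file("core"))
  .settings(
    scalaVersion := "2.12.20",
    target := baseDirectory.value / "target-2.12"
  )
lazy val core213 = project.in(file("core"))
  .settings(
    scalaVersion := "2.13.16",
    target := baseDirectory.value / "target-2.13",
    // a dependency that is only published for this Scala version
    libraryDependencies +=
      "org.scala-lang.modules" %% "scala-parallel-collections" % "1.0.4"
  )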
It has more applications, but I hope this addresses your question.
I am looking for the same resource-filtering functionality in SBT that Maven has, but it does not come out of the box.
After searching quite a bit both here and elsewhere, I found two plugins that come close, but neither one really does it. For example, xsbt-filter does not filter tokens such as ${baseDirectory}, while sbt-editsource does not work in conjunction with unit or integration testing (see issue 9).
So I tried to code it myself by modifying one of those two plugins, but here is the question I was not able to figure out in SBT, being new to it (and not a Scala pro):
How do you reuse SBT build settings for doing token resolution?
Those settings are in object sbt.Keys. So for example, baseDirectory is:
val baseDirectory = SettingKey[File]("base-directory")
There are dozens of them (perhaps hundreds) that can be used for resolving tokens in a resource file.
In the end, for doing token resolution within the plugin code, you need a Map[String, String] of all the build settings present in Keys, i.e. the key is "baseDirectory" and the value is whatever that value is at compile time.
I assume one way would be to use reflection, but before going down that path, I thought I would ask whether there is a more standard way of doing this from an sbt plugin, since it seems fairly basic.
All the plugins I have seen so far copy and paste each Keys setting (turning the variable name into a string key) into their plugin code.
For those unfamiliar with resource filtering, it means that the build tool should be able to resolve all tokens present in a resource file and at compile time place the resource file under target after substituting the token key with its value (example: resource file is "/User/me/Documents/myproject/src/test/resources/myfile.txt", in which text has a string ${target}, where "target" is the key and "/User/me/Documents/myproject/target" happens to be its value in that specific build).
In the end, for doing token resolution within the plugin code, you need a Map[String, String] of all the build settings present in Keys, i.e. the key is "baseDirectory" and the value is whatever that value is at compile time.
sbt internally holds on to this information, so you can query it.
For example, there's a command in sbt called inspect that tells you the current value for a setting and every other setting it depends on. A while back I wrote a plugin that calls it recursively and prints the result as an ASCII art tree, called sbt-inspectr: https://github.com/eed3si9n/sbt-inspectr.
There's also Project.runTask(...), which you might need.
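As a rough sketch of the static approach (assuming sbt 1.x and an AutoPlugin; the key selection and the substitute helper are purely illustrative, not a dump of everything in sbt.Keys):
import sbt._
import sbt.Keys._

object FilterTokensPlugin extends AutoPlugin {
  object autoImport {
    val filterTokens = taskKey[Map[String, String]]("Build settings exposed as ${token} values")
  }
  import autoImport._

  override def projectSettings: Seq[Setting[_]] = Seq(
    filterTokens := Map(
      "baseDirectory" -> baseDirectory.value.getAbsolutePath,
      "target"        -> target.value.getAbsolutePath,
      "name"          -> name.value,
      "version"       -> version.value
    )
  )

  // Replace every ${key} occurrence in a resource's text with its value.
  def substitute(text: String, tokens: Map[String, String]): String =
    tokens.foldLeft(text) { case (acc, (key, value)) => acc.replace("${" + key + "}", value) }
}
Enumerating every key in sbt.Keys automatically would still require reflection, or querying the build structure through Project.extract(state) / Project.runTask as mentioned above.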
I am using Play Framework v2.3. The problem I am facing is that any change to an HTML file followed by a browser refresh causes recompilation of the complete code. Can I avoid this?
Twirl templates are compiled, as stated by the docs:
Templates are compiled as standard Scala functions, following a simple naming convention. If you create a views/Application/index.scala.html template file, it will generate a views.html.Application.index class that has an apply() method.
There is no way to disable this behavior because it works this way by design. My suggestion here is to use ~ (tilde) before SBT commands so things happen as you save the file, for instance:
sbt ~run
This will recompile the changed file (and possibly others) every time you change and save it. Also, sbt has an option that can possibly help you here: withNameHashing.
See sbt docs to understand how it works. To enable it, add the following line to your build.sbt file:
incOptions := incOptions.value.withNameHashing(nameHashing = true)
I have used semantic-analyze-proto-impl-toggle to switch between a function's prototype and implementation, but when I use this feature it never does anything except say that it can't find the corresponding implementation; other features like name completion are OK. Can anyone help me with this issue? I would also really like to know whether Semantic only parses the current buffer and the header files on the include path, and not other implementation files. That is, does Semantic parse all the files in the project when it tries to find the implementation of a function?
The proto/impl toggle will look for symbols in all the files of your project that have been parsed so far. It runs into trouble when the sources don't have the right includes and you try to jump between methods. There is an explanation, with a hacky work-around patch on the mailing list here:
http://sourceforge.net/mailarchive/forum.php?thread_name=4FDBF890.8010505%40siege-engine.com&forum_name=cedet-devel
I have a workspace built using MS Visual Studio 2005 with all C code. In it I see many functions which are never called but are still compiled (they are not under any compile-time macro that would disable them from compiling).
I set the following optimization settings for the MS-VS2005 project to remove that unused code:
Optimization level - /Ox
Enable whole program optimization - /GL
I tried both Favor Speed (/Ot) and Favor Size (/Os)
In spite of all these options, when I look at the linker-generated map file, I still see the symbols (unused function names) present in the map file.
Am I missing something? I want to completely remove the unused code.
How do I do this?
The compiler compiles C files one at a time. Therefore, while compiling a C file that contains an unused function, the compiler cannot be sure that it will not be called from another file, and hence it compiles that function too. However, if that function were declared as static (file scope), then the compiler would know it is not used and could remove it.
Even with whole program optimization, I think it would still not be done since the compilation could be for a library.
Linkers do something similar to what you are looking for. If your code links against a library containing multiple objects, then any objects that do not contain functions used by your code (directly or indirectly) would not be included in the final executable.
One option would be to separate your code into individual libraries and object files.
PS - This is just my guess. The behavior of the compiler (with whole program optimization) or linker essentially depends on the design choices of that particular compiler or linker.
On our projects we have a flag set under Project Properties\Linker\References. We set it to Eliminate Unreferenced Data (/OPT:REF); according to the description, this is supposed to remove function calls or data that are never used. I am just going by the description; I have never tested this or worked with it. But I happened to see it within the last hour and figured it might be something you could try.