How to specify scalacOptions for individual files - scala

I'd like to specify compiler options for one file that differ from the rest of the project's. To be more concrete, there is some debugging output that I need to enable to figure out why a macro fails in that particular file. If I change the options globally, the entire project would be recompiled (and produce debug output), which I want to avoid.
How am I supposed to do that with sbt?

Using sbt's incremental compilation, you can use the following workaround:
1. Enable the compiler option for the whole project.
2. Compile the whole project (you'll get a lot of output, which you can ignore for now).
3. Run touch <file with macro>.scala from the console, or make some other modification to that file only.
4. Compile again.
Now only the single file of interest (and possibly files that depend on it) will be recompiled, and there will be much less debug output.
The above assumes your code compiles successfully. If it doesn't, you first need to bring it to a state where it does compile (e.g. by removing the code that fails compilation in the file you're interested in), and then go to step 3 above.
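As a concrete sketch of those steps (-Ymacro-debug-lite is only an example of a macro debugging option, and the file path is made up):

// build.sbt -- step 1: enable the debugging option for the whole project
scalacOptions += "-Ymacro-debug-lite"

sbt compile                               # step 2: full compile, lots of debug output
touch src/main/scala/MyMacroUser.scala    # step 3: mark only the file of interest as changed
sbt compile                               # step 4: only that file (and its dependents) is recompiled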

It is not possible to give special compilation parameters that apply only to some files of a compilation unit.
Macros are difficult to debug. The possible solutions are:
Debug the code
Add println calls in the macro's code, run sbt clean compile, and you should see your prints in the console (this also works in IntelliJ IDEA).
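For instance, a minimal Scala 2 sketch of the println approach (the macro must live in a module that is compiled before the code that calls it; all names here are made up):

import scala.language.experimental.macros
import scala.reflect.macros.blackbox

object DebugMacro {
  def hello(): Unit = macro helloImpl

  def helloImpl(c: blackbox.Context)(): c.Expr[Unit] = {
    import c.universe._
    // This println runs at compile time, so it shows up in the sbt (or IDEA) build output
    println(s"expanding hello() at ${c.enclosingPosition}")
    c.Expr[Unit](q"()")
  }
}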
Good luck

Related

CoqIDE is not properly connecting files in the project and is not compiling

I'm new to using Coq/CoqIDE and I'm not computer savvy either, so I don't know what's wrong or what to call the issue. I was trying to go through the Software Foundations book, but CoqIDE isn't working right. I'm using the latest Windows 10, and CoqIDE 8.10.2.
The first issue is that when I go to Compile -> Compile buffer in Basics.v, CoqIDE doesn't create a .vo file or a .glob. None of the other buttons worked either. Running CoqIDE as admin didn't make it work either, but I figured out I can get around this by manually dragging Basics.v onto the coqc executable.
I had no issues with Coq during the first lesson, but in the next lesson we're meant to import the definitions from Basics.v into Induction.v. When I run what they say:
From LF Require Export Basics.
I get the error "The file C:\Users\...\Coq Files\Tutorial\lf\Basics.vo contains library Basics and not library LF.Basics", even though the _CoqProject file contains "-Q . LF" as it should.
I can get around this error too by just writing "Require Export Basics.", which loads properly, up until I actually try calling a definition from Basics.
Running
Require Export Basics.
Example example: evenb 2 = true.
works until I get to evenb, and then gives the error
The reference evenb was not found in the current environment, even though it's in Basics.v.
If I get even more anal and try
Add LoadPath "C:\Users\...\Coq Files\Tutorial\lf".
From LF Require Export Basics.
I get the error
Cannot find a physical path bound to logical path matching suffix <> and prefix LF.
And then finally if I try
Add LoadPath "C:\Users\...\Coq Files\Tutorial\lf".
Require Export Basics.
Example example: evenb 2 = true.
it loads properly.
So I'm wondering: how should I fix the load path so that the project works without putting that junk in every file, and how do I make the Compile tab work?
Some people were talking about "hitting make in the top-level", but I have no idea what that means. I tried it anyway and ran the Makefile as a .bat, even though I had already downloaded everything properly so there shouldn't have been any need, but it didn't change anything.
I don't think I'm forgetting anything, thanks in advance.
Instead of choosing Compile > Compile buffer, try Compile > Make (when browsing any one of the vernacular files in Logical Foundations) - I think that is what others meant by "hitting make in the top-level". But first, you may want to remove the workarounds you added in e.g. Induction.v, and save a trivial change in Basics.v, such as adding/removing a newline somewhere, in order to convince make to recompile it.
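If CoqIDE's Compile menu keeps misbehaving, a command-line sketch of the same thing (assuming make is available on your Windows setup, and run from the lf directory containing _CoqProject):

coq_makefile -f _CoqProject -o Makefile    # only needed if the distributed Makefile is missing
make                                       # builds Basics.vo, Induction.vo, ... using the -Q . LF mapping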

Passing macros to Coverity exe files when using them through cmd

I am new to Coverity; I am using it from the command prompt with its .exe files. I want to pass specific macros to cov-build.exe so that those macros are defined when cov-emit.exe (when it is called by cov-build.exe) is parsing the .c files. So far I have tried the configuration below.
cov-build.exe Intermediate_folder --delete-stale-tus --preprocess-first --return-emit-failure "My_bat_file" -- -D My_macro_name=my_macro_body
Any help will be much appreciated. I am stuck on this.
Thanks and regards,
Newbie_in
cov-build wraps your existing build command, monitors it and spawns parallel compiler invocations in order to understand your code. These parallel compiler invocations will see the same command line being passed to your own compiler.
So if you want this define to take effect for your compiler as well as Coverity's, simply add it to your build the way you normally would, and Coverity will see it.
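For example (gcc and the file names are just placeholders for whatever My_bat_file actually invokes), the define is simply part of the ordinary compile command, and cov-build's parallel compiler invocation sees the same command line:

REM inside My_bat_file (illustrative only)
gcc -Dmy_macro_name=my_macro_body -c my_file.c -o my_file.o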
If you want to add a define that only Coverity's compiler can see, this is best done within the config for your compiler.
You can either edit the config directly (add
<append_arg>-Dmy_macro_name=my_macro_body</append_arg>
after the <begin_command_line_config> line), or re-configure using --xml-option.
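In other words, the relevant part of the config would go from

<begin_command_line_config>

to

<begin_command_line_config>
<append_arg>-Dmy_macro_name=my_macro_body</append_arg>

with the rest of the file left untouched (the exact surrounding XML varies by compiler and Coverity version).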
For example, if you're using the shortcut gcc config this would look like this:
$ cov-configure --gcc --xml-option=append_arg>-Dmy_macro_name=my_macro_body.
I noticed you're using --preprocess-first on the cov-build command line - I recommend against this, as it destroys XREFs, making it much more difficult to browse defect information, and it makes the analysis unable to find some defects (i.e. ones that are due to macros). --preprocess-next behaves like --preprocess-first but will only fire if the initial compilation attempt fails, so if you're using --preprocess-first to work around compilation issues, I strongly recommend using --preprocess-next instead.
If you do have compilation issues, it's always good to report them (along with a reproducer) to Coverity support so that they can be fixed in future releases.

Matlab - Compile Program w/ Separate Functions

I wrote a GUI program which makes use of separate m-functions. What I have done thus far was to add the paths of the folders where the sub-functions are stored on startup, and remove those paths on close.
Everything works fine, but when I tried to compile the project, I got an error message (attached) which suggests that ADDPATH is not a function that can be used in a standalone app.
Is there a way to overcome this without any major changes to the program? Maybe some way to include those functions without ADDPATH?

Scala Dynamic Tracing

I'm working on a problem about Scala tracing.
What I want to do is capture the environment after executing each line of a Scala file; that is, for a Scala program, I want to know which line is currently executing and which variables exist at that point, for every line.
Currently I'm using a stepping approach: I make the program step into or step return automatically for every line (by editing the Scala IDE in Eclipse), and then I can get what I want.
But for a long program, this is very slow and consumes a huge amount of memory, more than 20GB!
So do you have any better idea about how to achieve this? I want the whole trace for every line of a program in source-code terms: source file path, line number and the current variables.
Thanks!
AFAIK there are no free/open-source tools for the task you described, only commercial ones:
Chronon, but it looks like only Java is supported.
Jidebug, and I'm also not sure about Scala support.
Both can record runtime traces of your JVM code, and later you can drill into these to understand what happened.

Racket Interactive vs Compiled Performance

Whether or not I compile a Racket program seems to make no difference to the runtime performance.
Is it just the loading of the file initially that is improved by compilation? In other words, does running racket src.rkt do a jit compilation on the fly, which is why I see no difference in compiling vs interactive?
Even for tight loops of integer arithmetic, where I thought some difference would occur, the profile times are equivalent whether or not I previously did a raco make.
Am I missing something simple?
PS, I notice that I can run racket against the source file (.rkt) or .zo file. Does racket automatically use the .zo if one is found that corresponds to the .rkt file, or does the .zo file need to be used explicitly? Either way, it makes no difference to the performance numbers I'm seeing.
Yes, you're right.
Racket compiles code in two stages: first, the code is compiled into bytecode form, and then when it runs it gets jitted into machine code. When you compile a file, you're basically creating the bytecode which saves on re-compiling it later. Since that's usually not something that takes a lot of time for small pieces of code, you won't see any noticeable difference in runtimes. For an extreme example, you can delete all *.zo files in the collection tree and start DrRacket -- it will take a lot of time to start since there's a ton of code, but once it does start, it would run almost as usual. (It would also be slow to click "run" since that will reload and recompile some files.) Another concern for bigger pieces of code is that the compilation process can make memory consumption higher, but that's also not an issue with smaller pieces of code.
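A quick way to see this from a shell (the file name is made up; raco make typically writes the bytecode to a compiled/ subdirectory next to the source, and racket reuses it automatically when it is up to date):

raco make src.rkt             # compiles to bytecode, e.g. compiled/src_rkt.zo
racket src.rkt                # reuses the bytecode; only the initial load gets faster
racket compiled/src_rkt.zo    # running the .zo directly gives the same runtime speed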
See also the Performance chapter in the guide for hints on how to improve performance.
Racket will always compile your code, regardless of whether it is run interactively at the REPL or run from the command line. Here is the section in the guide that explains it. In interactive mode, the compiler turns every expression/definition into bytecode in memory and executes that. Otherwise, the compiler outputs the bytecode to .zo files.
Note: Eli replied at the same time I did. See his response for more details.