Scala Dynamic Tracing

I'm solving a problem about Scala tracing.
What I want to do is get the environment after executing each line of a Scala file. That is, for a Scala program, I want to know, for every line, which line is currently executing and which variables exist at that point.
Right now I'm using a stepping approach: I make the program step into or step return automatically for every line (by editing the Scala IDE in Eclipse), and then I can collect what I want.
But for a long program this is very slow and uses a huge amount of memory, more than 20 GB!
So do you have any better idea of how to achieve this? I want the whole trace for every line of a program, in source-code terms: the source file path, the line number, and the current variables.
Thanks!

AFAIK there are no free/open-source tools for the task you described, only commercial ones:
Chronon, but it looks like only Java is supported.
Jidebug, though I'm also not sure about Scala support.
Both can record runtime traces of your JVM code, and you can later drill into those traces to understand what happened.
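If you want to keep automating the stepping approach yourself, outside the IDE, the JVM's standard debug interface (JDI, in the com.sun.jdi packages) lets you drive stepping programmatically. Below is a minimal, untested sketch in Scala: it assumes you have already launched or attached a VirtualMachine and located the target ThreadReference (the vm and mainThread names are placeholders), and it simply prints the source position and the visible locals at every line step. The target code must be compiled with debug information for local variables to be visible.
import com.sun.jdi.{ThreadReference, VirtualMachine}
import com.sun.jdi.event.{StepEvent, VMDisconnectEvent}
import com.sun.jdi.request.{EventRequest, StepRequest}
import scala.jdk.CollectionConverters._

// Prints "path:line" plus the visible local variables at every line step of mainThread.
def traceLines(vm: VirtualMachine, mainThread: ThreadReference): Unit = {
  val request = vm.eventRequestManager()
    .createStepRequest(mainThread, StepRequest.STEP_LINE, StepRequest.STEP_INTO)
  request.setSuspendPolicy(EventRequest.SUSPEND_EVENT_THREAD)
  request.enable()
  vm.resume()

  var done = false
  while (!done) {
    val events = vm.eventQueue().remove()   // blocks until the next step event arrives
    events.asScala.foreach {
      case step: StepEvent =>
        val loc = step.location()
        println(s"${loc.sourcePath()}:${loc.lineNumber()}")
        val frame = step.thread().frame(0)
        frame.getValues(frame.visibleVariables()).asScala.foreach {
          case (variable, value) => println(s"  ${variable.name()} = $value")
        }
      case _: VMDisconnectEvent => done = true
      case _                    => ()
    }
    if (!done) events.resume()              // let the target run to the next line
  }
}
This is still the stepping strategy from the question, so it will not be fast, but streaming each step straight to a file instead of accumulating the whole trace in the IDE's memory should at least avoid the 20 GB footprint.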

Related

How could I run a single line of code (not script) from command prompt?

Simple question here, I just can't seem to phrase it in a way Google understands.
Say I wanted to execute a line of actual programming code (C++ or Java or Python... etc.) like SetCursorPos or printf from the command prompt. I vaguely imagine I would have to invoke the compiler and pass the command to it as a parameter, from where it would then be converted into machine language and passed to... where exactly?
Okay so that was kind of two questions.
How to run actual code from the command line and
what exactly is happening when a fully compiled program, or converted line of code (presuming these are essentially binary containers at that point), is executed?
Question one takes priority, obviously. Unfortunately, I cannot find any documentation on it, just a bunch of stuff vaguely related to it.
How to run actual code from the command line
Without delving into the vast amounts of blurriness between them, there are two major categories of language implementations: interpreters and compilers.
With many interpreters (or implementations with implicit compilation, such as V8 JavaScript's JIT compiler, or pretty much anything with a REPL), running a single line from the command line should be fairly trivial. CPython (the standard implementation of Python) has the -c command-line option:
$ python -c 'print("Hello, world!")'
Hello, world!
Language implementations with explicit compilation steps tend to be decidedly less simple. In particular, the compiler would need to accept source either directly from the argument list or from standard input (via piping or redirection). On the output side, your compiler would have to support immediately executing that program, or writing it to standard output so that an operating system feature (if one exists) can execute it from a pipe.
To my knowledge, most explicit compilers are not designed with such usage in mind. In such cases, your best bet is to see if there is a REPL available for the language in question, preferably one as compatible with your compiler as possible, or to create (or find) a wrapper that makes it look like your language has a REPL (a rough sketch of such a wrapper follows at the end of this answer). The wrapper would:
Accept input along the lines of CPython above.
Create a temporary source file behind the scenes with the code to be run and any necessary boilerplate.
Pass that file to the compiler.
Automatically run the resulting executable.
Delete the source file and executable. These may be cleaned up by the operating system later instead, if they're in a temp directory.
From the point of view of the user, this should look pretty similar to the CPython example, as they wouldn't have to interact with or see the compiler or temporary files.
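To make that concrete, here is a rough Scala sketch of such a wrapper, following the list above. It assumes scalac and scala are on the PATH, and the Snippet object name and the temp-directory handling are purely illustrative (Scala itself already has a REPL, so this only demonstrates the pattern).
import java.nio.file.{Files, Path}
import scala.sys.process._

// Usage, once compiled: scala RunOneLiner 'println("Hello, world!")'
object RunOneLiner {
  def main(args: Array[String]): Unit = {
    val snippet = args.mkString(" ")
    val dir: Path = Files.createTempDirectory("one-liner")
    val source = dir.resolve("Snippet.scala")
    // 1. Wrap the snippet in the boilerplate the compiler expects.
    Files.writeString(source, s"object Snippet { def main(a: Array[String]): Unit = { $snippet } }")
    // 2. Pass the temporary file to the compiler.
    val compiled = Seq("scalac", "-d", dir.toString, source.toString).! == 0
    // 3. Run the resulting class files.
    if (compiled) Seq("scala", "-cp", dir.toString, "Snippet").!
    // 4. Clean up (a temp directory would also be reclaimed by the OS eventually).
    dir.toFile.listFiles().foreach(_.delete())
    Files.deleteIfExists(dir)
  }
}
From the user's point of view this behaves like the CPython example: one command in, the snippet's output out, with the compiler and temporary files hidden behind the wrapper.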

Passing macros with coverity exe files while using them through cmd

I am new to Coverity, and I am using it from the command prompt with its .exe files. I want to pass specific macros to cov-build.exe so that those macros are in effect when cov-emit.exe (called by cov-build.exe) parses the .c files. So far I have tried the configuration below.
cov-build.exe Intermediate_folder --delete-stale-tus --preprocess-first --return-emit-failure "My_bat_file" -- -D My_macro_name=my_macro_body
So any help will be much appreciated. I am stuck on this.
Thanks and regards,
Newbie_in
cov-build wraps your existing build command, monitors it and spawns parallel compiler invocations in order to understand your code. These parallel compiler invocations will see the same command line being passed to your own compiler.
So if you want this define to take effect for your compiler as well as Coverity's, then you should simply add it to your build the way you normally would, and Coverity will see it.
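For example, add -DMy_macro_name=my_macro_body (or /DMy_macro_name=my_macro_body for MSVC) to the compiler flags inside My_bat_file, and then just wrap that build as usual (shown only as a sketch; the intermediate directory name is yours to choose):
cov-build.exe --dir Intermediate_folder My_bat_file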
If you want to add a define that only Coverity's compiler can see, this is best done within the config for your compiler.
You can either edit the config directly (add
<append_arg>-Dmy_macro_name=my_macro_body</append_arg>
after the <begin_command_line_config> line), or re-configure using --xml-option.
For example, if you're using the shortcut gcc config this would look like this:
$ cov-configure --gcc --xml-option=append_arg>-Dmy_macro_name=my_macro_body.
I noticed you're using --preprocess-first on the cov-build command line. I recommend against this, as it destroys XREFs, making it much more difficult to browse defect information, and it prevents the analysis from finding some defects (e.g. ones that are due to macros). --preprocess-next behaves like --preprocess-first but only kicks in if the initial compilation attempt fails, so if you're using --preprocess-first to work around compilation issues, I strongly recommend using --preprocess-next instead.
If you do have compilation issues, it's always good to report them (along with a reproducer) to Coverity support so that they can be fixed in future releases.

Is there a quick way to show the code of a method declared in the Scala Console?

I frequently use the Scala console to evaluate and test code before I actually write it down in my project. If I want to know the contents of a variable, I can just enter it and Scala evaluates it. But is there also a way to show the code of methods I entered?
I know the UP key shows single lines, but what I'm looking for is a way to show the whole code at once.
There's a file in your home directory named .scala_history that contains all of your recent REPL history. I regularly copy and paste code from this file into project source files. It's not exactly the same as showing the code for individual methods in the REPL, but it might help you accomplish the same goals.
See the comments by Paul Phillips in this issue for a discussion of some related functionality in the REPL (grouping statements in the history):
At some point I implemented the logic for this, but the real obstacle is jline. It has enough trouble figuring out where the cursor is under the simplest conditions. Start throwing big multiline blocks into the history and it breaks down in tears. Would love to see this and SI-2547 addressed by the community.
...
I expect to fix this soon too, but it depends on how well the recent jline work goes. I implemented it long ago, and display issues are the impediment.
Both of these comments are over two years old, so I wouldn't hold your breath.
I don't know a command to load all the code from the command line.
What you can do is :load path/to/my/file.scala to load some complex code, and re-:load it when you have changed the code in the file.
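For example (the path is just a placeholder; :load compiles and runs the file's contents in the current session):
scala> :load /tmp/Scratch.scala
(edit /tmp/Scratch.scala in your editor, then re-load it)
scala> :load /tmp/Scratch.scala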

Racket Interactive vs Compiled Performance

Whether or not I compile a Racket program seems to make no difference to the runtime performance.
Is it just the loading of the file initially that is improved by compilation? In other words, does running racket src.rkt do a jit compilation on the fly, which is why I see no difference in compiling vs interactive?
Even for tight loops of integer arithmetic, where I thought some difference would occur, the profile times are equivalent whether or not I previously did a raco make.
Am I missing something simple?
PS, I notice that I can run racket against the source file (.rkt) or .zo file. Does racket automatically use the .zo if one is found that corresponds to the .rkt file, or does the .zo file need to be used explicitly? Either way, it makes no difference to the performance numbers I'm seeing.
Yes, you're right.
Racket compiles code in two stages: first, the code is compiled into bytecode form, and then when it runs it gets jitted into machine code. When you compile a file, you're basically creating the bytecode which saves on re-compiling it later. Since that's usually not something that takes a lot of time for small pieces of code, you won't see any noticeable difference in runtimes. For an extreme example, you can delete all *.zo files in the collection tree and start DrRacket -- it will take a lot of time to start since there's a ton of code, but once it does start, it would run almost as usual. (It would also be slow to click "run" since that will reload and recompile some files.) Another concern for bigger pieces of code is that the compilation process can make memory consumption higher, but that's also not an issue with smaller pieces of code.
See also the Performance chapter in the guide for hints on how to improve performance.
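A quick way to see this for yourself, assuming default settings:
$ raco make src.rkt    # writes bytecode to compiled/src_rkt.zo
$ racket src.rkt       # picks up the .zo automatically when it is up to date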
Racket will always compile your code, regardless of whether it is run interactively at the REPL or run from the command line. Here is the section in the guide that explains it. In interactive mode, the compiler turns every expression/definition into bytecode in memory and executes that. Otherwise, the compiler outputs the bytecode to .zo files.
Note: Eli replied at the same time I did. See his response for more details.

Matlab code after compilation

I am totally a newbie in Matlab.
I want to ask: when we write a program in the Matlab software or IDE, save it as a .m (dot m) file, and then compile and execute it, which kind of file is that .m file converted into? I want to know this because I heard that Matlab is platform independent. I did google this, but I only got results about converting Matlab files to C, C++, etc.
Sorry for the silly question and thanks in advance.
Matlab is an interpreted language, so in most cases there is no persistent intermediate form. However, there is an encrypted intermediate form called pcode, and there are also the MATLAB Compiler and MATLAB Coder, which deliver code in other high-level languages such as C.
edit:
pcode is not generated automatically and should be platform/version independent. But its major purpose is to encrypt the code, not to compile it (although it does some partial compilation). To use pcode, you still need the MATLAB environment installed, so in many ways it acts like interpreted code.
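For example (the file name is arbitrary):
>> pcode myFunction.m    % creates an obfuscated myFunction.p in the current folder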
But from your follow-up question I guess you don't quite understand how MATLAB works. The code gets interpreted (although with a bit of Just-In-Time Compilation), so there is no need for a persistent intermediate code file: the actual data structures representing your code are maintained by MATLAB. In contrast to compiled languages, where your development cycle is something like "write code, compile & link, execute", the compilation (actually: interpretation) step is part of the execution, so you end up with "write code, execute" in most of the cases.
Just to give you some intuitive understanding of the difference between a compiler and an interpreter. A compiler translates a high level language to a lower level language (let's say machine code that can be executed by your computer). Afterwards that compiled code (most likely stored in a file) is executed by your computer. An interpreter on the other hand, interprets your high level code piece by piece, determining what machine code corresponds to your high level code during the runtime of the program and immediately executes that machine code. So there is no real need to have a machine code equivalent of your entire program available (so in many cases an interpreter will not store the complete machine code, as that is just wasted effort and space).
You could look at interpretation more or less as a human would interpret code: when you try to manually determine the output of some code, you follow the calculations line by line and keep track of your results. You don't generally translate that entire code into some different form and afterwards execute that code. And since you don't translate the code entirely, there is no need to persistently store the intermediate form.
As I said above: you can use other tools such as MATLAB Coder to convert your MATLAB code to other high-level languages such as C/C++, or you can use the MATLAB Compiler to compile your code to an executable form that depends on some runtime libraries. But those are only used in very specific cases (e.g. when you have to deploy a MATLAB application on computers/embedded devices without MATLAB, or when you need to improve the performance of your code, ...)
note: My explanation about compilers and interpreters is a quick comparison of the archetypal interpreter and compiler. Many real-life cases are somewhere in between, e.g. Java generally compiles to (JVM) bytecode which is then interpreted by the JVM and something similar can be said about the .NET languages and its CLR.
Since MATLAB is an interpreter, you can write code and just execute it from the IDE, without compilation.
If you want to deploy your program, you can use the MATLAB Compiler to create a stand-alone executable or a shared library that you can use in a C++ project. On Windows, MATLAB code would compile to an .EXE file or a .DLL file, respectively.
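If you have the MATLAB Compiler installed, the basic command looks something like this (the file name is a placeholder):
>> mcc -m mymain.m    % builds a standalone application, e.g. mymain.exe on Windows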