Displaying EUnit code coverage in Emacs

I am using Rebar to build my Erlang project and want to integrate it more tightly with Emacs. I found that if I add {cover_print_enabled, true}. to my rebar config file, I get code coverage in the build output.
However, there is also an option, cover_export_enabled, which outputs a binary file of some form. Is there an Emacs plugin to parse that file and color-code my code to show which code is covered by tests?
I really don't like having to switch to a browser to see code coverage.

As far as I know, there is no such plugin.
The exported cover data file can be read as follows:
1. Read one byte, giving the length of the next term; let's call it N.
2. Read N bytes in Erlang binary term format. This can be decoded with binary_to_term/1.
3. If the term from step 2 is of the form {'$size',X}, then read X bytes and decode them as a term. (This happens when the binary representation of the term is longer than 255 bytes.)
4. Continue from step 1, until end of file.
Distel has an Emacs Lisp implementation of binary_to_term called erlext-read-obj in erlext.el.
I haven't looked into what to do with the terms in the file, once decoded, but hopefully this is enough to get someone started. Read lib/tools/src/cover.erl if in doubt.
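If someone wants to prototype this outside Emacs first, here is a minimal sketch in Python of the framing loop described above. The decode_term argument is a placeholder for an Erlang external-term-format decoder (the role erlext-read-obj plays in Emacs Lisp); how atoms come out after decoding depends on that decoder, so the '$size' check below is only illustrative:
def read_cover_terms(path, decode_term):
    """Yield the Erlang terms stored in a cover export file.
    decode_term is a placeholder: it must decode the Erlang external
    term format, i.e. do what binary_to_term/1 does."""
    with open(path, "rb") as f:
        while True:
            prefix = f.read(1)
            if not prefix:                  # step 4: stop at end of file
                break
            n = prefix[0]                   # step 1: one-byte length
            term = decode_term(f.read(n))   # step 2: decode N bytes
            # Step 3: terms longer than 255 bytes are wrapped as {'$size', X};
            # the exact atom representation depends on your decoder.
            if isinstance(term, tuple) and len(term) == 2 and str(term[0]) == "$size":
                term = decode_term(f.read(term[1]))
            yield term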

Just added this feature to rebar.el in commit https://github.com/leoliu/rebar.el/commit/9ba8699ff6310721226b93341e62491ebfd0ee99
Leo

Related

Add the hash of the code to the executable file

I have an STM32 application which uses two blocks of memory. In block 0, I have boot code (which runs just after power-on) and in block 7, I have application code (which may or may not run, depending on the authorization decision made by the boot code).
The two pieces of code are developed, and hence generated, by two separate projects. They are flashed to their specific blocks (boot code to block 0 and application code to block 7) of the STM32 NOR memory using the openocd tool, by giving an offset value to openocd's write_image command.
What I would basically like to do in the boot code is calculate the hash of the application code and compare it with a reference digest. If they are equal, I will hand control over to the application code. For that, after I generate the executable (which can be in elf, hex or bin format) of the application code, I want to:
Create another file (in any of the formats listed above) which has a size of 128 KB
Copy the content of the executable file to the beginning (offset 0) of the newly created file
Write the hash of the executable to the last 32 bytes of the newly created file
Fill the gap with 0xFF
Finally, flash this file (if it still is an executable) to block 7 of the memory
Do you think that it is doable and feasible? If so:
Which format should I use to generate the executable?
Is there anything I need to pay special attention to in order to achieve this?
Lastly, do you think that it makes sense to do this, or is there a more standard way to achieve the same purpose?
Thanks a lot in advance.
You just need to add an additional step to your build sequence. After linking, extract the binary file from the ELF.
Then write a program in your favourite programming language which calculates the hash and appends the result to that bin file.
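A minimal sketch of such a program in Python, assuming a 128 KB block, a 32-byte SHA-256 digest and a .bin input; note that it hashes the whole padded region (image plus 0xFF fill), so the boot code must hash the same first 128K-32 bytes to get a matching digest. The file names are placeholders:
import hashlib
import sys
BLOCK_SIZE = 128 * 1024   # size of the application block (assumed 128 KB)
DIGEST_SIZE = 32          # SHA-256 digest length
def pad_and_sign(bin_in, bin_out):
    with open(bin_in, "rb") as f:
        image = f.read()
    payload_size = BLOCK_SIZE - DIGEST_SIZE
    if len(image) > payload_size:
        sys.exit("application image does not fit in the block")
    # Fill the gap between the image and the digest with 0xFF
    padded = image + b"\xff" * (payload_size - len(image))
    # The boot code must hash the same padded region to compare digests
    digest = hashlib.sha256(padded).digest()
    with open(bin_out, "wb") as f:
        f.write(padded + digest)
if __name__ == "__main__":
    pad_and_sign("application.bin", "application_signed.bin")
The resulting file is exactly 128 KB, so it can be flashed to block 7 with the same write_image offset you already use.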

Two closely matching files: get corresponding lines?

I'm in a situation where I'm programmatically generating LaTeX code, and I want SyncTeX to point to the correct lines in the original file.
The generation is basically doing template expansion, so the original files are nearly identical to the generated ones, but with some snippets expanded.
I'm wondering, is there a diff tool or library that will easily give me the line number of the original file that corresponds to a given line in the generated one? Can this be extracted from a normal Unix diff somehow?
This is part of a build script, so ideally something easy to run, like bash or python, is preferred to something that needs to be compiled.
Google’s diff-match-patch lib is a neat solution to questions like these: https://github.com/google/diff-match-patch
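Since you mention Python: the standard library's difflib can also give you the line correspondence without any external dependency. Here is a minimal sketch (file names are placeholders); lines that only exist in the generated file simply won't appear in the mapping, so you may want to fall back to the nearest preceding matched line:
import difflib
def line_map(original_path, generated_path):
    """Map 1-based line numbers in the generated file to the
    corresponding 1-based line numbers in the original file."""
    with open(original_path) as f:
        original = f.readlines()
    with open(generated_path) as f:
        generated = f.readlines()
    matcher = difflib.SequenceMatcher(None, original, generated, autojunk=False)
    mapping = {}
    for block in matcher.get_matching_blocks():
        # Each block says: original[a:a+size] == generated[b:b+size]
        for offset in range(block.size):
            mapping[block.b + offset + 1] = block.a + offset + 1
    return mapping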

Doxygen cond statement does not work

I am trying to separate out internal and external documentation using the doxygen \cond construct, but I just can't seem to get it working. I would essentially like to exclude some files completely, not conditionally. Regardless of where I add the tag (before the include, before the header guards, etc.), the files and source both show up.
What I have tried, in vain, is to take the test file for the conditional test from the doxygen repo and add it to the project.
Steps to reproduce [Linux]
Create a new directory.
Copy-paste the above file (I had to rename it to .h, as .c was passed over?).
Generate a dummy config via doxygen -g.
Update the Doxyfile with ENABLED_SECTION = COND_ENABLED.
Run doxygen.
Check html/index.html.
The conditional content, however, is still visible in the HTML documentation it generates for the project. I have set the ENABLED_SECTION variable with other values, but the cond_enabled function still shows up. Running the testing directory of the doxygen project itself, it passes. So I am lost.
Any suggestions?
Tried with latest version 1.8.14.
Thanks!
Regarding the \cond problems (not an answer directly to the real problem you face, I think, but too long for a comment).
The mentioned file is used in the (limited) testing doxygen does, and its first lines contain some instructions on what to do. Furthermore, there is a default Doxyfile that is used with the tests. It is hard to run a separate test outside the doxygen build tree.
Regarding the remark "Running the testing directory of the project (doxygen) it passes": this is correct. Here, at the moment, testing is only done against the XML output, and the generated output is compared to a previously created version of the XML output. No tests are done, at the moment, in respect to HTML or PDF / LaTeX. Recently the test framework has been slightly extended, so in the future this should be possible (comparing the xhtml and tex output), but some work still has to be done here.
The current version of the parser sees the \cond in the first line (a normal C comment) as a doxygen command and skips everything until the first \endcond (your friend in these cases is always doxygen -d preprocessor). I think that removing / modifying the first line will already give a better result. There is, however, another hiccup for e.g. HTML output: as the function cond_enabled is not documented and EXTRACT_ALL is not set to YES, the function will not appear in the documentation. So it is best to also add a line of documentation to the function cond_enabled.
Regarding the observed HTML problems, I modified the relevant test in doxygen slightly and pushed a proposed patch to GitHub (pull request 714, https://github.com/doxygen/doxygen/pull/714).
Note: the problem of skipping a \cond in a normal C comment is quite a bit harder to solve (given the logical complexity of the doxygen code in pre.l and commentcnv.l).
EDIT 2018/06/10: The pull request has been integrated into the master version on GitHub.

How to produce a .js file from a Haskell source file with Haste?

So I noticed, while answering this question, that the person who asked it appears to be a JavaScript developer. And as the code I wrote in Haskell is easy enough, I thought I'd give Haste a try and compile it to JavaScript.
So, I downloaded the Windows binary package of Haste (why does the .msi require a reboot?!!?), added it to my path, issued haste-cabal update and haste-cabal install split, and after a bit of reading the output of hastec --help, I issued:
PS E:\h\stackoverflow> hastec -o hexagon.js --pretty-print hexagon.hs
as my best guess on how to get the output I am looking for.
Contrary to my expectation, Haste's output was this:
hastec.exe: user error (shell expression failed in readModule: Data.Binary.Get.runGet at position 8: not enough bytes)
So, my question: what do I have to do to get a JavaScript source file?
Is it possible that you have an old version of Haste lying around, or have intermediate files (.jsmod, for instance) from a different version of the compiler in your source directory? This sounds like the (quite unhelpful) error message Haste produces when it runs into a corrupted intermediate file.
Check that the version of the binary you're calling is what you expect (hastec --version). Then, try getting rid of all intermediate files in the directory as well as any files in %USERPROFILE%\AppData\Roaming\haste, reinstalling split, and recompiling with the -fforce-recomp flag. You should also add a main function, so that Haste has an entry point to your program from which to start linking. If all you want to do is to make some Haskell function available to external JavaScript, you can use the export foreign function interface:
{-# LANGUAGE OverloadedStrings #-}
module Main where
import Haste.Foreign
import Hexagon
main = export "picture" Hexagon.picture
You will probably also want to compile your program with the --onexec flag, to make sure that main runs and exports picture immediately when loaded, and not on page load which is the default:
> hastec -o hexagon.js --pretty-print --onexec hexagon.hs
After doing this, any code included after hexagon.js will be able to call e.g. Haste.picture(5); in order to produce a picture of size 5.
(Re: MSI installer requiring a reboot, this is required since it adds the Haste binaries to your %PATH%, which does not take effect immediately. I assume that a re-login would be enough to make it take effect, however.)

Siemens S5 to S7 XR.INI file

I'm trying to convert a program from S5 to S7. I've taken a look at the tutorials and documents that are available, but I can't seem to find any information about this problem:
The program is pretty complicated, and I need the XR.INI file to convert it with the S5 converter program that came with Simatic S7. When I open the project folder, I see all the files I need, and which are described in the tutorial made by Siemens:
.ST.S5D
.XR.INI
.Z0.SEQ
I start the converter and open .ST.S5D. When I do this, I get the error that there is no .XR.INI file. But I can see it in the folder; it's called CERMST.INI. So I guess the name isn't correct?
In PG2000 I remade the XRF with Ctrl-Q, but nothing changed. I also tried renaming the file to CERM.XR.INI, because I saw somewhere that these file names can only be 6 characters long (not sure if this is true).
I first tried without the XRF. There were some errors because of the ASM command; in the tutorial I saw that the converter can't handle these, so I removed those functions from the program to rewrite them manually later on. After this I got an error because I had no XRF. The error was:
Error No PLC type description found for called or addressed block
So I really need the .XR.INI file, but the program can't find it, even though it's clearly in the folder. How can I fix this? How can I make a new .INI file?
Link to tutorial: http://www.sankotek.com/TR/Dokumanlar/STEP%207%20-%20From%20S5%20to%20S7.pdf
Thanks in advance!
Siemens S5 file names also use the last two letters before the file extension to mark the file type, like xxxxxxXX.XXX, where x is the file name and X is the S5 file type, without a second dot in the middle like you have. A file name length limit does exist; I think it comes from DOS or some older operating system's file name length limit.
If you don't have an XRF file then, if I remember correctly, the compiler does not have a compile order. Some blocks have not been compiled yet but are referenced in a caller block that the compiler tries to compile; the compiler does not find the called block in the compiled blocks folder and wham... fatal error. When I had such troubles, I compiled the called blocks independently before the calling blocks, so you can do without the XRF file. But try to include Z0.SEQ; that file contains the assignment list, so you get nice variable names instead of just variable address numbers.
Also, Stack Overflow is not the right place to ask questions about Siemens S5 to S7 conversion. You should post your questions on http://www.plctalk.net; they have people there who used to program S5 PLCs.