Can't switch between prototype and implementation in C++ project - emacs

I have been using semantic-analyze-proto-impl-toggle to switch between a function's prototype and implementation, but whenever I use this feature it does nothing except report that it can't find the corresponding implementation. Other features like name completion work fine. Can anyone help me with this issue? I would also really like to know whether Semantic only parses the current buffer and the header files in the include path, rather than the other implementation files. In other words, does Semantic parse all the files in the project when it tries to find the implementation of a function?

The proto/impl toggle will look for symbols in all the files of your project that have been parsed so far. It runs into trouble when the sources don't have the right includes and you try to jump between methods. There is an explanation, with a hacky work-around patch on the mailing list here:
http://sourceforge.net/mailarchive/forum.php?thread_name=4FDBF890.8010505%40siege-engine.com&forum_name=cedet-devel
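For what it's worth, a setup step that often helps here (independent of that patch) is making sure Semantic actually knows your project's include paths, for example through an EDE project. The snippet below is only a sketch: the project name and all paths are placeholders you would replace with your own.
(global-ede-mode 1)
(semantic-mode 1)
;; Hypothetical EDE project definition so Semantic can resolve includes
;; when it looks for the implementation matching a prototype.
(ede-cpp-root-project "myproject"
                      :file "/path/to/myproject/Makefile"
                      :include-path '("/include" "/src")
                      :system-include-path '("/usr/include"))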

Related

Is there a general specification for references within a file?

I wanted to write an IDE plugin to simplify working with references to code in my documentation, but decided to first check whether a specification (like URI) already exists, so as not to reinvent the wheel.
I tried to find the answer to this on the internet but failed.
We know that such references work in different IDEs:
appRoot.folder.file.Class#method
appRoot.folder.file:35
appRoot/folder/file.ext:35
But what if you need the path to a nested property in a JSON or XML file?
I naively imagine that the # sign marks the root, and that for a JSON path I could then use something like this:
appRoot/folder/jsonFile.json#oneLevelProp.twoLevelProp
But of course this does not currently work in any IDE that I know of.
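To make the idea concrete, here is a rough sketch of how I imagine such references being parsed; the grammar is purely my own assumption, not an existing standard, and the helper name is made up.
import re
# path, optional :line suffix, optional #fragment (a dotted property path)
REF = re.compile(r"^(?P<path>[^#:]+)(?::(?P<line>\d+))?(?:#(?P<fragment>.+))?$")
def parse_ref(ref: str) -> dict:
    """Split a reference into a file path, an optional line number and an
    optional fragment such as a dotted property path inside a JSON file."""
    m = REF.match(ref)
    if not m:
        raise ValueError(f"not a recognised reference: {ref}")
    return {
        "path": m.group("path"),
        "line": int(m.group("line")) if m.group("line") else None,
        "fragment": m.group("fragment").split(".") if m.group("fragment") else None,
    }
print(parse_ref("appRoot/folder/file.ext:35"))
print(parse_ref("appRoot/folder/jsonFile.json#oneLevelProp.twoLevelProp"))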

MPI Autocompletion in Visual Studio Code

I'm trying to use Visual Studio Code to develop Fortran MPI programs. While I can build and run them just fine, it would be very helpful if I could use IntelliSense/autocompletion features for MPI (as well as other external modules). I have /usr/lib/openmpi/ (which contains mpi_f08.mod) as part of fortran.includePaths in my settings.json. However, when I use mpi_f08, VS Code reports the problem message Module "mpi_f08" not found in project. Here is a minimal CMake build example:
! hello.f90
program hello
use mpi_f08
implicit none
integer :: ierror, nproc, my_rank
call MPI_Init()
call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierror)
call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierror)
print*, "hello from rank ", my_rank
call MPI_Finalize()
end program hello
# CMakeLists.txt
cmake_minimum_required(VERSION 3.12)
project(hello_mpi)
enable_language(Fortran)
find_package(MPI REQUIRED)
add_executable(hello_mpi hello.f90)
include_directories(${MPI_Fortran_INCLUDE_PATH})
target_link_libraries(hello_mpi PUBLIC ${MPI_Fortran_LIBRARIES})
I would like to be able to (i) get rid of the warning/message and, more importantly, (ii) get suggestions from MPI when I press CTRL+space, as I would when calling an internal module.
I'll post a partial answer since it's better than nothing; hopefully this helps someone else and/or enables someone else to answer my question fully.
It seems the issue relates to the Fortran language server, which can be configured by adding a .fortls JSON file, as explained in its GitHub README: https://github.com/hansec/fortran-language-server
I added the following, which allowed it to find not only local modules but also MPI (and the external module json-fortran):
{
  "source_dirs": ["src", "."],
  "ext_source_dirs": [
    "/path/to/json-fortran/src",
    "/path/to/openmpi-4.1.2/ompi/mpi/fortran/use-mpi-f08"
  ]
}
This doesn't capture everything in json-fortran, which I think is because of its .inc files, as autocomplete doesn't offer entries like json_file::get.
As for MPI, this mostly works: it gives me all the functions I can think of needing, but with _f08 appended. I don't know the inner workings of Open MPI, but I guess e.g. MPI_Init wraps MPI_Init_f08 for backward-compatibility reasons. For now I can simply autocomplete to the _f08 version and remove that suffix manually. (I also tried adding openmpi-4.1.2/ompi/mpi/fortran/use-mpi-tkr and openmpi-4.1.2/ompi/mpi/fortran/mpif.h, but no luck.)
It would be nice to get this detail sorted, though. It is also mildly annoying that I now have to list the source dirs manually (removing "source_dirs" makes it stop finding local modules).
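In case it helps someone reproduce this: the ext_source_dirs entry above points into an unpacked Open MPI source tree, because (as far as I can tell) the language server parses Fortran source files rather than compiled .mod files. A rough way to get a matching tree, with the version number only as an example:
mpirun --version   # shows which Open MPI you are running, e.g. 4.1.2
# unpack the matching source tarball from open-mpi.org, then point
# ext_source_dirs at openmpi-4.1.2/ompi/mpi/fortran/use-mpi-f08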

Removing annotations from a Modelica model

I'm developing a Modelica library and need to produce a document with source code listings. I'd like to be able to include the source of the Modelica models without annotations.
I could manually edit them out, but I'm looking for a more automated strategy. I'm guessing the most convenient and straightforward approach is to use some tool to save .mo files with no annotations and include those in my document (I'm using \lstinputlisting in LaTeX).
Is it possible to do this? I have access to Dymola, OpenModelica and JModelica. Dymola is obviously capable of producing such a listing, as it's able to include it in the automatically generated documentation (File > Export > HTML...). I've been looking into scripting with Dymola and OpenModelica, but haven't found a way to do this either.
JModelica seems like it could be a good option, but I don't have experience working with Python. If this is possible and someone can give me some pointers, I'm willing to look into it myself. I found a mention of a prettyprint function that might do the job, but I'm not sure where to start; I can't even find a reference to that function in the latest documentation.
It would also be more convenient for me to find a way of doing it with Dymola/OpenModelica (whether through the UI or by using a script). Have I missed something?
I think you could use saveTotalModel("total.mo", MyModelName) in OpenModelica. This will strip most annotations (though not the ones used for code generation, if I remember correctly) and pretty-print the source code, including all dependencies. Then you just copy-paste the models/packages that you want to include in the listing. Or, if you prefer, you can do something like the following to include only the code for a particular model:
loadModel(Modelica);
loadFile("MyModel.mo");
saveTotalModel("total.mo", MyModel.A.B);
clear();
loadFile("total.mo");
str := list(MyModel.A.B);
writeFile("MyModel.A.B.listing", str);
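If you want to automate this step, the commands above can be saved in a plain-text script (say stripAnnotations.mos; the file name is just an example) and run in batch mode with the OpenModelica compiler:
omc stripAnnotations.mos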

Turn off Scala lint warnings at a less-than-global scope?

I'd like to be able to turn off a particular compiler warning, but only for a single file in my project. Is this possible?
The context is that I have a single source file that makes calls to an external macro library that produces adapted-arg warnings. I found that I can eliminate these warnings by changing my build.sbt file:
scalacOptions ++= Seq("-Xlint:-adapted-args,_" /*, ... */)
However, this turns off the warning globally, and I only want it off for the single file that raises them.
I haven't had any luck searching for any of the following possible solutions I thought might exist:
Specifying separate compilation options for different files in my project in my build.sbt file
Providing some pragma-like comment in my source file to change the warnings generated by the compiler, similar to the special scalastyle:on/off comments recognized by Scalastyle
Some annotation for smaller regions of code, like #unchecked
So, is there any way to have different linting options in effect for different files, or even for limited regions of code?
The expectation was that folks would write a custom Reporter that would filter out undesirable warnings.
It's easy to write one that filters by file name, perhaps given a whitelist.
The error/warn API supplies the textual Position. It would also be easy to parse the position.lineContent for a magic comment token like IGNORE or SUPPRESS, which is not as convenient as a SuppressWarnings annotation but is easy to implement.
The compiler asks the reporter if there were errors.
The custom reporter is specified with -Xreporter myclass, or it wouldn't surprise me if someone has written an sbt plugin.
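For illustration, here is a rough sketch of such a reporter. It assumes the Scala 2.12 compiler API (ConsoleReporter and the warning hook have moved around in later versions), and the class name, the whitelisted file name and the SUPPRESS token are all placeholders:
import scala.reflect.internal.util.Position
import scala.tools.nsc.Settings
import scala.tools.nsc.reporters.ConsoleReporter
// Drops warnings that come from whitelisted files or from lines carrying a magic comment.
class SelectiveReporter(settings: Settings) extends ConsoleReporter(settings) {
  private val silencedFiles = Set("MacroHeavy.scala") // hypothetical per-file whitelist
  private def silenced(pos: Position): Boolean =
    pos.isDefined &&
      (silencedFiles.contains(pos.source.file.name) ||
        pos.lineContent.contains("SUPPRESS")) // magic comment token on the offending line
  override def warning(pos: Position, msg: String): Unit =
    if (!silenced(pos)) super.warning(pos, msg)
}
Compile it against scala-compiler, put it on the compiler's classpath, and select it with -Xreporter SelectiveReporter as described above.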

In a project using GNU Autotools, is there a task to launch xgettext?

Summary:
I have a project using GNU Autotools. I have a pot file. I need to update it. Is there a magical "make" target that runs xgettext for me (I'm lazy)?
Verbose version:
Hi
I am trying to set up a project using GNU Autotools and gettext.
I'm trying to follow the 'lazy' path (that is, only writing configure.ac, Makefile.am, and such, and letting the tools generate the rest for me as much as possible).
I ran gettextize once on my package, so I got a package.pot file created, and I derived a fr.po file from it (I'm trying to translate into French).
I never managed to get my code translated, but I figured it might be because the code was not in the proper place. The translated string is in a lib instead of the main, and the documentation is quite unclear about what I must do in this case. If my main calls a function in a lib, and that function uses _(), should I use gettext or dgettext? My lib is only there for organisation purposes, so I'm okay with using the same domain (only one package.pot file for the whole app).
So, to try something simpler, I moved my string to the main (it's really just a hello world for the moment). I need to update the package.pot file, at least so that it picks up the string's new position, don't I? In that case, should I run xgettext manually (painfully passing it the list of all interesting cpp files, which will be a pain when I have more than one), or is there a 'make whatever' target somewhere that I can run?
This may look stupid, but I've not been able to find it.
Also, any help figuring out why my code is not translated (anything not already covered by http://www.gnu.org/software/gettext/FAQ.html#integrating_noop) is welcome!
Thanks
PH
OK, it turns out that:
there is an update-po target in the generated Makefile of the po/ folder that does just what I want;
this target looks at the files referenced in the POTFILES.in file, which I had forgotten to update.
So it was something stupid.
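For anyone else who lands here, a minimal sketch of the whole loop (the file names are illustrative): list every source file that contains translatable strings in po/POTFILES.in, then regenerate and merge the catalogs from the po/ directory.
# po/POTFILES.in -- paths are relative to the top-level source directory
src/main.cpp
src/greeting.cpp
Afterwards:
cd po
make update-po   # re-runs xgettext over the files in POTFILES.in and merges fr.po with the fresh template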