How to access the ICElements of local variables (variables inside functions) and variables in header files? - eclipse

The objective is to access the elements of a C file in Eclipse in order to check customized naming rules for C elements (global variables, local variables, function declarations).
I tried to access the C file elements as shown below, but in this case I am only able to access the global variables and function names in the .c file.
How can local variables (variables inside functions) and variables in included header files be accessed?
ITranslationUnit tu = CUIPlugin.getDefault().getWorkingCopyManager().getWorkingCopy(input); // input: the editor input
ICElement[] ele = tu.getChildren(); // only yields the top-level elements (global variables, functions, ...)

Local variables
ICElement is mostly used for representing code elements in CDT's various views, such as the Outline View or Type Hierarchy. As such, local variables (which do not appear in these views) do not have an ICElement representation.
For code analysis use cases like this, it's probably better to use the AST API. The AST is a detailed representation of the entire code in a file. It can be accessed via ITranslationUnit.getAST(). You can then use an ASTVisitor to traverse the AST and visit any declarations you like and check their names.
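For example, a minimal sketch, assuming tu is the ITranslationUnit from the snippet in the question (checked exceptions are omitted):
IASTTranslationUnit ast = tu.getAST(); // may throw CoreException
ast.accept(new ASTVisitor() {
    {
        shouldVisitDeclarators = true; // request callbacks for every declarator, including local variables
    }
    @Override
    public int visit(IASTDeclarator declarator) {
        IASTName name = declarator.getName();
        System.out.println("declared name: " + name); // apply your naming-rule checks here
        return PROCESS_CONTINUE;
    }
});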
Variables in included header files
There are two sub-categories here: header files inside the project directory, and header files outside the project directory.
Header files inside the project directory have their own ITranslationUnit, and you can use either the ICElement API or the AST API to analyze them with that ITranslationUnit as a starting point. Note that a file does not need to be open in an editor to obtain an ITranslationUnit for it. You can traverse all of the files in the project with something like ICElementVisitor, with the ICProject as a starting point.
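A rough sketch of such a traversal, assuming cproject is your ICProject (the visitor may throw CoreException):
cproject.accept(new ICElementVisitor() {
    @Override
    public boolean visit(ICElement element) throws CoreException {
        if (element instanceof ITranslationUnit) {
            ITranslationUnit unit = (ITranslationUnit) element;
            // analyze unit here, e.g. via unit.getChildren() or unit.getAST()
            System.out.println("visiting " + unit.getElementName());
        }
        return true; // true = also visit this element's children
    }
});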
Header files outside the project directory do not have an ITranslationUnit, and there is no straightforward way to obtain an AST for them. However, assuming your project's indexer is enabled, the indexer does create ASTs for them and store information from those ASTs in the project's index, which you could examine. There are index APIs that can be used to traverse the index; some relevant ones are IIndexManager.getIndex(ICProject), IIndex.getAllFiles(), and IIndexFile.findNames().
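A sketch of that route, again assuming cproject is your ICProject; the index must be read-locked while in use, checked exceptions are omitted, and the 0/Integer.MAX_VALUE range passed to findNames() is an assumption meant to cover the whole file:
IIndex index = CCorePlugin.getIndexManager().getIndex(cproject);
index.acquireReadLock();
try {
    for (IIndexFile file : index.getAllFiles()) {
        for (IIndexName name : file.findNames(0, Integer.MAX_VALUE)) {
            System.out.println(file.getLocation() + ": " + name); // name printed via toString() for illustration
        }
    }
} finally {
    index.releaseReadLock();
}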
Edit: Additional Tips
1) How to differentiate between function declarations and simple declarations.
I can think of two ways:
Syntactically, based on the structure of the AST. For function definitions, the type of the declaration node will be IASTFunctionDefinition. For variable declarations, it will be IASTSimpleDeclaration, with the decl-specifier being IASTSimpleDeclSpecifier or IASTNamedTypeSpecifier (you additionally want to check that the declarator is not an IASTFunctionDeclarator, to filter out function declarations that are not definitions).
Semantically. If you find the IASTName for the declaration, you can call IASTName.resolveBinding(), and check whether the returned binding is an IFunction or an IVariable.
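A sketch of the semantic approach, assuming name is an IASTName you encountered while visiting the AST:
IBinding binding = name.resolveBinding();
if (binding instanceof IFunction) {
    // a function (declaration or definition)
} else if (binding instanceof IVariable) {
    // a variable; IVariable also covers fields and parameters via its sub-interfaces
}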
2) How to get the return type of a function and the variable type?
For these tasks, you need to get the binding. A variable's type can be queried by IVariable.getType(), and a function's return type via IFunction.getType().getReturnType().
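Continuing the sketch above, ASTTypeUtil.getType() can be used to turn the resulting IType into a printable string:
if (binding instanceof IVariable) {
    IType type = ((IVariable) binding).getType();
    System.out.println("variable type: " + ASTTypeUtil.getType(type));
} else if (binding instanceof IFunction) {
    IType returnType = ((IFunction) binding).getType().getReturnType();
    System.out.println("return type: " + ASTTypeUtil.getType(returnType));
}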
3) Is there a way to get an ICElement from an IASTSimpleDeclaration?
There isn't a simple way that I know of. However, you shouldn't need to - if you're traversing the AST, all the information you could want can be found in the AST.

Related

How do I wrap a Matlab library without polluting my path variable?

Let us assume I want to use a foreign Matlab library with a structure like this:
folderName
play.m
run.m
open.m
If I simply add folderName to my Matlab path variable, it will easily lead to name conflicts. I don't want to rename the files, so that I can still obtain new releases of the example library (the package concept is not used in the example library). Renaming would also require modifying the code, since some library functions call other library functions.
How do I write local wrappers, which wrap the functions from that example library? My wrappers could then have my desired names and input parameters.
Clarification: How do I use an external library (toolbox) without name conflicts, without renaming and without modifying each function?
Rename files: Makes it hard to update the external library.
Simply put them in a package folder: This will break internal library function calls.
You want to use a package, which will establish a namespace, such that things in the package are then qualified with the package name. You can find more information here: http://www.mathworks.com/help/matlab/matlab_oop/scoping-classes-with-packages.html

Can I work around name conflicts?

I have a Fortran project with some name conflicts (from doxygen's point of view). Sometimes a local variable in a procedure may have the same name as a subroutine or function. For compilation/linking there are no problems, as the different definitions live separate lives, for instance:
progA/main.f defines and uses the variable delta.
libB/delta.f defines a function named delta.
progB/main.f uses the function delta defined in libB.
progB is linked with libB, progA is not linked with libB.
In this case, when generating call/caller graphs, or linked source code, the variable delta in progA/main.f will be identified as the function delta. Is there some combination of doxygen settings I can use to inform it that progA is not supposed to use definitions in libB, or something similar?
Another issue is that I may have functions/subroutines with the same name in different subdirectories. Again, as long as they are not linked together this does not represent a problem for compilation, but doxygen cannot identify which of them is meant in links, calls, etc. Is there some way to work around this (without renaming procedures, that is)?

loading parameter files for different data sets

I need to analyse several sets of data which are associated with different parameter sets (one single set of parameters for each set of data). I'm currently struggling to find a good way to store these parameters such that they are readily available when analysing a specific dataset.
The first thing I tried was saving them in a script file parameters.m in the data directory and loading them with run([path_to_data,'/parameters.m']). I understand, however, that this is not good coding practice, and it also gave me scoping problems (I think), as changes in parameters.m were not always reflected in my workspace variables. (Workspace variables were only updated after clear all and rerunning the code.)
A clean solution would be to define a function parameters() in each data directory, but then again I would need to add the directory to the search path. Also, I fear I might run into namespace collisions if I don't give the functions unique names. On the other hand, using unique names is not very practical...
Is there a better solution?
So define a struct or cell array called parameters and store it in the data directory it belongs in. I don't know what your parameters look like, but ours might look like this:
parameters.relative_tolerance = 10e-6
parameters.absolute_tolerance = 10e-6
parameters.solver_type = 3
.
.
.
and I can write
save('parameter_file', 'parameters')
or even
save('parameter_file', '-struct', 'parameters', 'field1', 'field2', ...)
The online help reveals how to use -struct to store fields from a structure as individual variables, should that be useful to you.
Once you've got the parameters saved you can load them with the load command.
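For example, to read them back when analysing a particular dataset (assuming the .mat file was saved into that dataset's directory, and reusing your path_to_data variable):
s = load(fullfile(path_to_data, 'parameter_file.mat')); % load returns a struct of the saved variables
parameters = s.parameters;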
To sum up: create a variable (most likely a struct or cell array) called parameters and save it in the data directory for the experiment it refers to. You then have all the usual Matlab tools for reading, writing and investigating the parameters as well as the data. I don't see a need for a solution more complicated than this (though your parameters may be complicated themselves).

Do I have to put get/set methods in the class definition in matlab ?

Is one forced to place all get and set functions in the class definition file in Matlab ?
I'm asking since this really makes the file a bit messy and defeats the purpose of having a class definition folder.
Yes, if you use property set and get access methods (in fact any method with a dot in the name), you must include them within the classdef file, not in separate files. See the documentation.
However, if you have a special reason to want to put as much as possible in separate files, you can define methods getMyProp and setMyProp in separate files, and then within the classdef file have the get.myProp and set.myProp functions call them.
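A minimal sketch of that pattern, assuming a class folder @MyClass containing MyClass.m, getMyProp.m and setMyProp.m (all names here are made up):
classdef MyClass
    properties
        myProp
    end
    methods
        function val = get.myProp(obj)
            val = getMyProp(obj);      % body lives in @MyClass/getMyProp.m
        end
        function obj = set.myProp(obj, val)
            obj = setMyProp(obj, val); % body lives in @MyClass/setMyProp.m
        end
    end
end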
If you use get and set methods, then you need to define them in the classdef file, but you can also simply declare your properties as public instead.

What is the closest thing MATLAB has to namespaces?

We have a lot of MATLAB code in my lab. The problem is there's really no way to organize it. Since all the functions have to be in the same folder to be called (or you have to add a bunch of folders to MATLAB's path environment variable), it seems that we're doomed to have loads of files in the same folder, all in the global namespace. Is there a better way to organize our files and functions? I really wish there were some sort of module system...
MATLAB has a notion of packages which can be nested and include both classes and functions.
Just make a directory somewhere on your path with a + as the first character, like +mypkg. Then, if there is a class or function in that directory, it may be referred to as mypkg.mything. You can also import from a package using import mypkg.mysubpkg.*.
The one main gotcha about moving a bunch of functions into a package is that functions and classes do not automatically import the package they live in. This means that if you have a bunch of functions in different m-files that call each other, you may have to spend a while dropping imports in or qualifying function calls. Don't forget to put imports into subfunctions that call out as well. More info:
http://www.mathworks.com/help/matlab/matlab_oop/scoping-classes-with-packages.html
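As a small sketch of that layout (all names below are made up):
% someFolderOnThePath/+mypkg/dostuff.m
% someFolderOnThePath/+mypkg/+mysubpkg/helper.m
mypkg.dostuff();          % fully qualified call
import mypkg.mysubpkg.*   % or import the subpackage...
helper();                 % ...and call its functions unqualified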
I don't see the problem with having to add some folder to Matlab's search path. I have modified startup.m so that it recursively looks for directories in my Matlab startup directory, and adds them to the path (it also runs svn update on everything). This way, if I change the directory structure, Matlab is still going to see all the functions the next time I start it.
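A minimal startup.m sketch of that idea (the checkout root c:\matlabcode is an assumption, and the svn update step is left out):
% startup.m
root = 'c:\matlabcode';   % assumption: where the checkouts live
addpath(genpath(root));   % add root and all of its subdirectories to the path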
Otherwise, you can look into object-oriented code, where you store all the methods in an @objectName folder. However, this may lead to a lot of re-writing code that can be avoided by updating the path (there is even an "Add with Subfolders" button if you add the folder to the path from the File menu) and doing a bit of moving code.
EDIT
If you would like to organize your code so that some functions are only visible to the functions that call them directly (and if you don't want to re-write in OOP), you put the calling functions in a directory, and within this directory, you create a subdirectory called private. The functions in there will only be visible to the functions in the parent directory. This is very useful if you have to overload some built-in Matlab functions for a subset of your code.
Another way to organize & reuse code is using matlab's object-oriented features. Each object is customarily in a folder that begins with an "@" and has the file(s) for that class inside. (Though the newer syntax does not require this for a class defined in a single file.) Using private folders inside class folders, matlab even supports private class members. Matlab's new class notation is relatively fully-featured, but even the old syntax is useful.
BTW, my startup.m examines a well-known location that I do my SVN checkouts into, and adds all of the subfolders onto my path automatically.
The package system is probably the best. I use the class system (@ClassName folder), but I actually write objects. If you're not doing that, it's silly just to write a bunch of static methods. One thing that can be helpful is to put all your matlab code into a folder that isn't on the matlab path. Then you can selectively add just the code you need to the path.
So say you have two projects, stored in "c:\matlabcode\foo" and "c:\matlabcode\bar", that both use common code stored in "c:\matlabcode\common", you might have a function "setupPaths.m" like this:
function setupPaths(projectName)
    basedir = fullfile('c:', 'matlabcode');
    addpath(genpath(fullfile(basedir, projectName)));
    switch (projectName)
        case {'foo', 'bar'}
            addpath(genpath(fullfile(basedir, 'common')));
    end
Of course you could extend this. An obvious extension would be to include a text file in each directory saying what other directories should be added to the path to use the functions in that directory.
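For example, a sketch of that extension, assuming each project folder may contain a dependencies.txt file listing one folder name (relative to basedir) per line:
depFile = fullfile(basedir, projectName, 'dependencies.txt');
if exist(depFile, 'file')
    deps = strsplit(strtrim(fileread(depFile)), '\n'); % one dependency folder per line
    for k = 1:numel(deps)
        addpath(genpath(fullfile(basedir, strtrim(deps{k}))));
    end
end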
Another useful thing if you share code is to set up a "user specific/LabMember" directory structure, where you have different lab members save code they are working on. That way you have access to their code if you need it, but don't get clobbered when they write a function with the same name as one of yours.