How can I make gvim follow objects and variables to their definition, like Eclipse does?

I am used to Ctrl+clicking in Eclipse to follow variables/objects to their definition in order to understand the code.
I just started my first job and I only have access to Unix (vi or gvim). Is it possible to do what I'm looking for?
Edit: what do I mean by "is it possible"? Let's say class foo is defined in file foo.hpp and is instantiated in foo.cpp. I want to be able to jump to the definition of class foo from any instantiation of it in foo.cpp.

With Vim you can use tags files generated by Exuberant Ctags and other compatible programs.
"Tags" are functions, variables, and the like; their name, signature, and kind are stored alongside their location in a file that Vim can parse, which lets you navigate through your code.
:help tags will tell you all you need to know.
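For example (a minimal sketch; foo here is just the class from your question), you would generate a tags file at the root of the source tree and then jump around from inside Vim:
ctags -R .
vim foo.cpp
:tag foo       " jump to the definition of foo
Ctrl-]         " jump to the definition of the identifier under the cursor
Ctrl-t         " jump back to where you came from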

ILE RPG Bind by reference using CRTSQLRPGI

I've been trying to find a solution for this, but I cannot find one.
What I'm trying to do is work with the "bind by reference" ability, but in ILE RPG written with embedded SQL.
I can use the BNDDIR ctl-opt in my source, and everything works correctly.
But that means a "bind by copy" method. I checked by deleting the SRVPGM and even the BNDDIR, and the caller program still works.
So, is there any way to use "bind by reference" in an ILE RPG SQL program?
Following my question, an example:
Program SNILOG is a module that contains several procedures, some of them exported.
In QSRVSRC I set the exported procedures, in a source member with the same name, SNILOG. Something like this:
STRPGMEXP PGMLVL(*CURRENT)
/*********************************************************************/
/* *MODULE SNILOG INIGREDI 04/10/21 15:25:30 */
/*********************************************************************/
EXPORT SYMBOL("GETDIAG_TOSTRING")
EXPORT SYMBOL("GETDIAGNOSTICS")
EXPORT SYMBOL("GRABAR_LOG")
EXPORT SYMBOL("SNILOG")
ENDPGMEXP
Since some of the procedures are written with embedded SQL, the compilation must be done with CRTSQLRPGI, using the parameter OBJTYPE(*SRVPGM).
So, I finally get a SRVPGM called SNILOG, with those 4 procedures exported.
Once I've got the SRVPGM, I add it to a BNDDIR called SNI_BNDDIR.
Ok, let's go to the caller program: SNI600V.
Defined with dftactgrp(*no), of course! And compiled with CRTSQLRPGI and parameter OBJTYPE(*PGM).
Here, if I use the control spec bnddir('SNI_BNDDIR'), it works fine.
But not fine enough, as this is a "bind by copy" method (I can delete the SRVPGM or the BNDDIR, and it is still working fine).
When I'm not working with SQL, I can use the CRTPGM command and set the BNDSRVPGM parameter to specify the SRVPGM whose procedures the program is going to call.
But I cannot find any similar option in the CRTSQLRPGI command, nor in the ctl-opt keywords (we have BNDDIR, but no BNDSRVPGM option).
Any idea?
I'm running V7R3M0 with TR level: 6
Thanks in advance!
The use of bnddir('SNI_BNDDIR') is the way to bind either by reference or by copy.
The key is what does your BNDDIR look like?
If you want to bind by reference, then it should include *SRVPGM objects.
If you want to bind by copy, then it should include *MODULE objects.
Generally, you want a *BNDDIR for every *SRVPGM that includes the modules (and maybe a utility *SRVPGM or two) needed for building a specific *SRVPGM.
Then one or more *BNDDIR that includes just *SRVPGM objects that are used to build the programs that use those *SRVPGMs.
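As a sketch (the library name MYLIB is just a placeholder), a binding directory set up for bind by reference would list the service program rather than the module:
CRTBNDDIR BNDDIR(MYLIB/SNI_BNDDIR)
ADDBNDDIRE BNDDIR(MYLIB/SNI_BNDDIR) OBJ((MYLIB/SNILOG *SRVPGM))
WRKBNDDIRE BNDDIR(MYLIB/SNI_BNDDIR)
With that in place, compiling SNI600V with bnddir('SNI_BNDDIR') should bind to the *SRVPGM by reference, so deleting the SRVPGM afterwards would break the call instead of going unnoticed.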

Generating Swift files from templates

My goal is to create (or find, if one exists) a tool which can produce Swift files from templates.
For example, let's say I need to create a new ViewController with a UITableView. It should be based on MVVM architecture with dependency injection. Let's name this view "PersonsList".
So, for this task I need to produce:
PersonListViewController
PersonListViewModel
PersonListViewModelProtocol
PersonCell
VM for cell and protocol for VM
Lots of files.
I want to say something to my tool like
create tableview-template Person
and as a result get the generated files. The files should contain an empty implementation of each class.
How should I do that? I am thinking about simple console app but I don’t know which language I should use. Maybe there is a better idea? Maybe there is a ready tool? Any help? :)
You could manually create the templates yourself and then write a short script (in Python / bash / swift etc) that goes through and replaces keywords with arguments you've passed in.
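A minimal sketch of that idea in Python (the templates/ folder layout, the __NAME__ placeholder, and the Generated/ output folder are assumptions, not an existing tool):
#!/usr/bin/env python3
# Expand every *.swift.template under templates/<kind>/ into Generated/<Name>/,
# replacing the __NAME__ placeholder in file names and file contents.
import sys
from pathlib import Path

def generate(template_dir: Path, name: str, out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    for template in template_dir.glob("*.swift.template"):
        target_name = template.name.replace("__NAME__", name).removesuffix(".template")
        target = out_dir / target_name
        target.write_text(template.read_text().replace("__NAME__", name))
        print(f"created {target}")

if __name__ == "__main__":
    # e.g.  python generate.py tableview Person
    kind, name = sys.argv[1], sys.argv[2]
    generate(Path("templates") / kind, name, Path("Generated") / name)
A template file named __NAME__ViewModel.swift.template containing "class __NAME__ViewModel { }" would then come out as PersonViewModel.swift with an empty PersonViewModel class.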

How do I execute classes in Puppet

I just started using Puppet. I don't know how to execute classes in Puppet.
I have the files config.pp, init.pp, install.pp, and service.pp.
For example install.pp :
class sshd::install{ ... }
Next, I declare my class in init.pp with "include sshd::install".
I also tried to run classes with :
class{'sshd::install':} -> class{'sshd::config':} ~> class{'sshd::service':}
After that, I launch "puppet apply init.pp", but nothing happens.
My scripts work individually, but with classes I don't know how to execute them all.
Thanks
I'm not sure how much research you've done into Puppet and how its code is structured, but these may help:
Module Fundamentals
Digital Ocean's guide.
It appears that you are starting out with a basic module structure (based on your use of init/install/service), which is good. However, your execution approach is that of a direct manifest (not the module itself), which won't work for the module you are testing, due to autoloading, unless your files are inside a valid module path.
Basically: You want to put your class/module structured code within Puppet's module path (puppet config print modulepath) then you want to use another manifest file (.pp) to include your class.
An example file structure:
/etc/puppetlabs/code/modules/sshd/manifests/init.pp
/etc/puppetlabs/code/modules/sshd/manifests/install.pp
/etc/puppetlabs/code/modules/sshd/manifests/service.pp
/tmp/my_manifest.pp
Your class sshd(){ ... } code goes in the init.pp, and class sshd::install(){ ... } goes in install.pp etc...
Then the 'my_manifest.pp' would look something like this:
include ::sshd
And you would apply with: puppet apply /tmp/my_manifest.pp.
Once this works, you can learn about the various approaches to applying manifests to your nodes (direct, like this, using an ENC, using a site.pp, etc... Feel free to do further reading).
Alternatively, as long as the module is within your modulepath (as mentioned above) you could simply do puppet apply -e 'include ::sshd'
In order to get the code that you have to operate the way you are expecting it to, it would need to look like this:
# Note: This is BAD code, do not reproduce/use
class sshd() {
  class{'sshd::install':} ->
  class{'sshd::config':} ~>
  class{'sshd::service':}
}
include sshd
or something similar, which entirely breaks how the module structure works. (In fact, that code will not work without the module in the correct path and will display some VERY odd behavior if executed directly. Do not write code like that.)
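For contrast, a minimal sketch of a conventional init.pp that keeps the module structure intact and still orders the pieces (using contain plus chaining arrows; the resource contents are omitted):
class sshd {
  contain sshd::install
  contain sshd::config
  contain sshd::service

  Class['sshd::install']
  -> Class['sshd::config']
  ~> Class['sshd::service']
}
With that in the module's init.pp, puppet apply -e 'include ::sshd' should apply install, config, and service in that order.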

How to resolve bindings during execution with embedded Python?

I'm embedding Python into a C++ application. I plan to use PyEval_EvalCode to execute Python code, but instead of providing the locals and globals as dictionaries, I'm looking for a way to have my program resolve symbol references dynamically.
For example, let's say my Python code consists of the following expression:
bear + lion * bunny
Instead of placing bear, lion and bunny and their associated objects into the dictionaries that I'm passing to PyEval_EvalCode, I'd like the Python interpreter to call back my program and request these named objects.
Is there a way to accomplish this?
By providing the locals and globals dictionaries, you are providing the environment in which the evaled code is executed. That effectively provides you with an interface to map names to objects defined in the C++ app.
Can you clarify why you do not want to use the dictionaries?
Another thing you could do is process the string in C++ and do string substitution before you eval the code....
Possibly. I've never tried this, but in theory you might be able to implement a small extension class in C++ that implements the mapping protocol's __getitem__ (via the tp_as_mapping slots of PyTypeObject), since that is what name lookup in evaluated code goes through. Pass an instance of this as locals to PyEval_EvalCode (globals must be a real dictionary) and your C++ method should be asked to resolve your lions, tigers, & bears for you.
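As a sketch of the idea at the Python level (the Resolver class and the placeholder value it returns are made up; in the embedded case you would build the equivalent object through the C API, as described above):
class Resolver(dict):
    # Called for any name not already in the dictionary; in the embedded
    # case this is where you would call back into the C++ host.
    def __missing__(self, name):
        print(f"host asked to resolve {name!r}")
        return 2  # placeholder object supplied by the host

code = compile("bear + lion * bunny", "<embedded>", "eval")
print(eval(code, {}, Resolver()))  # resolves bear, lion and bunny on demand, then prints 6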

What is the closest thing MATLAB has to namespaces?

We have a lot of MATLAB code in my lab. The problem is there's really no way to organize it. Since all the functions have to be in the same folder to be called (or you have to add a bunch of folders to MATLAB's path environment variable), it seems that we're doomed to have loads of files in the same folder, all in the global namespace. Is there a better way to organize our files and functions? I really wish there were some sort of module system...
MATLAB has a notion of packages which can be nested and include both classes and functions.
Just make a directory somewhere on your path with a + as the first character, like +mypkg. Then, if there is a class or function in that directory, it may be referred to as mypkg.mything. You can also import from a package using import mypkg.mysubpkg.*.
The one main gotcha about moving a bunch of functions into a package is that functions and classes do not automatically import the package they live in. This means that if you have a bunch of functions in different m-files that call each other, you may have to spend a while dropping imports in or qualifying function calls. Don't forget to put imports into subfunctions that call out as well. More info:
http://www.mathworks.com/help/matlab/matlab_oop/scoping-classes-with-packages.html
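A small sketch of the layout and usage (the package and function names are made up):
% file: +mylab/+util/scale01.m   (the +mylab folder sits on the path)
function y = scale01(x)
y = (x - min(x)) ./ (max(x) - min(x));
end

% in calling code, either use the fully qualified name ...
y = mylab.util.scale01(data);
% ... or import the package first
import mylab.util.*
y = scale01(data);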
I don't see the problem with having to add some folder to Matlab's search path. I have modified startup.m so that it recursively looks for directories in my Matlab startup directory, and adds them to the path (it also runs svn update on everything). This way, if I change the directory structure, Matlab is still going to see all the functions the next time I start it.
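A minimal sketch of that startup.m idea (the checkout root is an assumption):
% startup.m
addpath(genpath('c:\matlabcode'));  % add this folder and everything below it to the path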
Otherwise, you can look into object-oriented code, where you store all the methods in an @objectName folder. However, this may lead to a lot of re-writing code that can be avoided by updating the path (there is even an "Add with Subfolders" button if you add the folder to the path from the File menu) and doing a bit of moving code.
EDIT
If you would like to organize your code so that some functions are only visible to the functions that call them directly (and if you don't want to re-write in OOP), you put the calling functions in a directory, and within this directory, you create a subdirectory called private. The functions in there will only be visible to the functions in the parent directory. This is very useful if you have to overload some built-in Matlab functions for a subset of your code.
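For example (folder and function names are made up):
% file: mytools/process_data.m   (mytools is on the path)
function out = process_data(in)
out = clean_rows(in);   % resolves to mytools/private/clean_rows.m
end
% clean_rows.m lives in mytools/private/ and is callable only from functions in mytools/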
Another way to organize & reuse code is using MATLAB's object-oriented features. Each class customarily lives in a folder that begins with an "@" and has the file(s) for that class inside (though the newer syntax does not require this for a class defined in a single file). Using private folders inside class folders, MATLAB even supports private class members. MATLAB's new class notation is relatively fully-featured, but even the old syntax is useful.
BTW, my startup.m examines a well-known location that I do my SVN checkouts into, and adds all of the subfolders onto my path automatically.
The package system is probably the best. I use the class system (@ClassName folders), but I actually write objects; if you're not doing that, it's silly just to write a bunch of static methods. One thing that can be helpful is to put all your MATLAB code into a folder that isn't on the MATLAB path. Then you can selectively add just the code you need to the path.
So say you have two projects, stored in "c:\matlabcode\foo" and "c:\matlabcode\bar", that both use common code stored in "c:\matlabcode\common"; you might have a function "setupPaths.m" like this:
function setupPaths(projectName)
basedir = fullfile('c:', 'matlabcode');
addpath(genpath(fullfile(basedir, projectName)));
switch (projectName)
    case {'foo', 'bar'}
        addpath(genpath(fullfile(basedir, 'common')));
end
Of course you could extend this. An obvious extension would be to include a text file in each directory saying what other directories should be added to the path to use the functions in that directory.
Another useful thing if you share code is to set up a "user specific/LabMember" directory structure, where you have different lab members save code they are working on. That way you have access to their code if you need it, but don't get clobbered when they write a function with the same name as one of yours.