Using and modifying global variables across multiple modules in Python - import

Python is my newest language (Python 2.6); my background is in C/C++. Normally, I would create a global variable and be able to modify and access it across all of my files. I am trying to achieve the same functionality in Python.
Based on this post: Python - Visibility of global variables in imported modules I see that "Globals in Python are global to a module, not across all modules". I have multiple variables that rely on user input and must be accessed and modified by four different modules as well as the main code, which is why I am trying to use global(ish) variables in Python.
I have made two attempts to get this to work:
1) If I put the code with the modifiable global variables in my main code, I run into the issue of "Executing the Main Module Twice" as detailed here: http://python-notes.curiousefficiency.org/en/latest/python_concepts/import_traps.html The program works, other than being executed twice.
2) If I create a separate module and put the variables into a function, I find that I have undefined variables in my other modules and my code errors out. From stepping through the code, I see that the variables are accessible/modifiable by moduleA, but then everything breaks when moduleB and moduleC try to use them, because the modifications did not persist.
How do I use Python to suit my needs? I believe my problem lies in attempting to use global variables and in how I import them.

To solve this issue, I stopped trying to use global variables across multiple modules in my main code and stopped importing individual variables. I basically ended up placing moduleA (a single function) within moduleB. I created a class within moduleB for the functions I needed, and the main code and moduleC now access those functions.
I still use global variables, though, within the functions in moduleB. If I made the variables not global, I got attribute errors.
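
For illustration, here is a minimal sketch of the layout described above (all module, class, and variable names are placeholders, not the ones from my project): the shared state lives at module level in moduleB, a class wraps the functions that read and change it, and the main script and moduleC go through that class instead of importing the variables themselves.

# moduleB.py (sketch)
user_choice = None                      # shared, module-level state

class Shared(object):
    def set_choice(self, value):
        global user_choice              # needed to rebind the module-level name
        user_choice = value

    def describe(self):                 # roughly what moduleA's single function did
        return "current choice: %s" % user_choice

# main.py or moduleC.py (sketch)
import moduleB

shared = moduleB.Shared()
shared.set_choice("foo")
print(shared.describe())                # -> current choice: foo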


How to set Environment Variables with SystemVerilog?

My current project sets an environment variable in a Perl module, and later on a SystemVerilog file calls a function that uses that variable. The requirement is that whatever we added in the Perl module is present in the environment variable at the time of the call.
The problem, however, is that something between the Perl module and the SystemVerilog call meddles with my variable. I can't figure out what it is, and fixing that issue is not pertinent to my project, so I just want to set the variable to whatever the Perl module sets it to and move on.
There's a handy getenv function in Perl, and I am able to use getenv in SV as well, but there doesn't seem to be a setenv. What is the appropriate way to set an environment variable in SV?
Is the perl code invoked from within SystemVerilog using a $system() call? If so, environment changes made by the perl code will definitely NOT propagate back to the SV world, because those changes are made only in the $system() subprocess's environment.
The setenv() system call works for me via SystemVerilog DPI-C in all the tools I use (recent Fedora OS, recent versions of Mentor/Cadence/Synopsys simulators), but there may be some older *nix systems on which it's not available. I used the prototype as given in "man 3 setenv". Looking at discussions on other StackOverflow forums, it seems that using putenv() is not a great idea, especially from the DPI where you have no idea what will happen to the memory used for the DPI string argument. setenv() makes a copy of its argument strings, and should not be at risk from that problem.
It seems to me that if your tool flow isn't correctly propagating environment variables in the way you intend, then you have bigger problems than how to mess with the env from SystemVerilog. I specifically chose NOT to add environment-modifying functions to the svlib utility library, precisely because using the environment is a very bad way to communicate information within a SV simulation. I guess it would make sense if you need to set up an environment for some external program that you would then invoke using a SV $system() call.
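To make that last point concrete, here is a minimal sketch of setting a variable via DPI-C and then launching an external command that inherits it (the variable name and command are made up; the import prototype follows man 3 setenv):

module set_env_demo;
  // DPI-C import of the libc call; setenv() copies its string arguments
  import "DPI-C" function int setenv(string name, string value, int overwrite);

  initial begin
    void'(setenv("MY_TOOL_OPTS", "-verbose", 1));
    // the child process started by $system() sees the updated environment
    $system("echo MY_TOOL_OPTS=$MY_TOOL_OPTS");
  end
endmodule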
Mh... it turns out the answer is trivial, but this is the only thread on this question, so I'll leave it up in case someone else finds themselves in a similar situation:
SystemVerilog does not have a setenv() or getenv() function of its own. They are actually imported from C using the following construct:
module/program foo();
  import "DPI-C" function <return type> foonction(<function arguments>);
endmodule/endprogram
Apparently in my case someone had done this for getenv() but never for setenv(). The reason I didn't catch it is that my code in question was included in the following way:
**foo.sv**
if (var.bit) begin
  call_function();
  use_environment_variable();
end
**bar.sv**
module bar();
  <do stuff>
  `include "foo.sv"  // <-- foo code is copied in here, after calculations have occurred
endmodule
Trying to do the DPI-C import in foo.sv triggers an error, because the import would then arrive after the calculations have already taken place. To solve this, we need to do the import in bar.sv instead, like so:
module bar();
  import "DPI-C" function int setenv(string name, string value, int override);
  <do stuff>
  `include "foo.sv"
endmodule
Setting environment variables from SV is not very useful unless you are running another executable from SV.
If you want to get environment variables, you can use Verilab's svlib functions:
function automatic string sys_getEnv(string envVar);
function automatic bit sys_hasEnv(string envVar);
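For example, a rough usage sketch, assuming svlib is compiled and its package (svlib_pkg in the svlib distribution, as far as I recall) is imported; the variable name is made up:

module env_check;
  import svlib_pkg::*;

  initial begin
    if (sys_hasEnv("MY_TOOL_OPTS"))
      $display("MY_TOOL_OPTS = %s", sys_getEnv("MY_TOOL_OPTS"));
  end
endmodule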

Checking the functions used by a script in Matlab

I have a package of code that's been written by someone else. I am running a script, which calls some functions, which in turn calls some more functions, etc. I would like to get the list of functions that are not MATLAB built-in functions but are a part of the package.
I tried using matlab.codetools.requiredFilesAndProducts('file.m'), which gives me a list of such functions, but not all the functions. I can see when I look at the code that there are many more functions called by a function in the script. Does this command only show 'first-level' functions? How can I get the full list?
This function performs well, but isn't guaranteed to find all files. It works fine for me, though:
http://www.mathworks.com/matlabcentral/fileexchange/15484-recursively-get-file-dependencies-of-a-given-function/content/getFileDependencies.m
Check out the function inmem, which may help to solve this task. It displays all Matlab functions that are currently in memory, i.e. those that have been called since the last clear all or clear functions statement. So you would start with a clean workspace, execute your program, and then check with inmem which functions are loaded into the cache and are not in the Matlab install directory; those are the functions you are interested in.
You may also use the command-line helper disp-inmem, which was scripted to (half-)automate this task.
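A rough sketch of that workflow (the script name is a placeholder):

clear functions                         % start from a clean slate
run('my_script.m')                      % exercise the code you want to analyse
used = inmem;                           % cellstr of M-functions currently in memory
for k = 1:numel(used)
    p = which(used{k});
    if ~strncmp(p, matlabroot, numel(matlabroot))
        disp(p)                         % part of the package, not shipped with MATLAB
    end
end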

Cucumber's AfterConfiguration can't access helper modules

I have a modular Sinatra app without a DB, and in order to test memcache I have some test files that need to be created and deleted on the file system. I would like to generate these files in an AfterConfiguration hook using some helper methods (which are in a module shared with RSpec, which also needs to create/delete these files for testing). I only want to create them once, at the start of Cucumber.
I do not seem to be able to access the helpers from within AfterConfiguration, which lives in "support/hooks.rb." The helpers are accessible from Cucumber's steps, so I know they have been loaded properly.
This previous post seems to have an answer: Want to load seed data before running cucumber
The second example in this answer seems to say my modules should be accessible to my AfterConfiguration block, but I get "undefined method `foo' for nil:NilClass" when attempting to call helper method "foo".
I can pull everything out into a rakefile and run it that way, but I'd like to know what I'm missing here.
After digging around in the code, it appears that AfterConfiguration not only runs before any features are loaded, but also before World is instantiated. Running self.class inside the AfterConfiguration block returns NilClass; running self.class inside any other hook, such as a Before, returns MyWorldName. In retrospect this makes sense, as every feature is run in a separate instance of World.
This is why helpers defined as instance methods (i.e. def method_name) are unknown. Changing my methods to module methods (i.e. def ModuleName.method_name) allows them to work, since they really are module methods anyway.
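A minimal sketch of that change (module, method, and path names are made up):

# features/support/fixture_helpers.rb
require 'fileutils'

module FixtureHelpers
  # module method (not an instance method), so it can be called
  # before any World object exists
  def self.create_fixture_files
    FileUtils.mkdir_p('tmp/fixtures')
    File.write('tmp/fixtures/sample.txt', 'hello')
  end
end

# features/support/hooks.rb
AfterConfiguration do |_config|
  FixtureHelpers.create_fixture_files
end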

Perl shallow syntax check? ie. do not check syntax of imports

How can I perform a "shallow" syntax check on Perl files? The standard perl -c is useful, but it also checks the syntax of the imports. This is sometimes nice, but not great when you work in a code repository and push to a running environment, and you have a function defined in the repository but not yet pushed to the running environment. The check fails because the imports reference system paths (e.g. use Custom::Project::Lib qw(foo bar baz)).
It can't practically be done, because imports have the ability to influence the parsing of the code that follows. For example use strict makes it so that barewords aren't parsed as strings (and changes the rules for how variable names can be used), use constant causes constant subs to be defined, and use Try::Tiny changes the parse of expressions involving try, catch, or finally (by giving them & prototypes). More generally, any module that exports anything into the caller's namespace can influence parsing because the perl parser resolves ambiguity in different ways when a name refers to an existing subroutine than when it doesn't.
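For instance (Foo and blue are made-up names), whether the last line below even compiles under perl -c depends on what Foo exports:

use strict;
use Foo;                        # hypothetical module

# If Foo exported a sub named blue(), this line parses as a function call.
# If it did not, 'blue' is a bareword and compilation fails under strict.
my @colours = (blue, "green");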
There are two problems with this:
1) How do you avoid failing -c if the required modules are missing? There are two solutions:
A. Add a fake/stub module in production.
B. In all your modules, use a special catch-all @INC subroutine entry (using subs in @INC is explained here; a sketch of such a hook appears at the end of this answer). This obviously has the problem that the module will NOT fail in real production runtime if the libraries are missing - DoublePlusNotGood in my book.
2) Even if you could somehow skip failing on missing modules, you would STILL fail on any use of the identifiers imported from the missing module or used explicitly from that module's namespace.
The only realistic solution to this is to go back to #1A and use a fake stub module, but this time one that has a declared and (as needed) exported identifier for every public interface, e.g. do-nothing subs or dummy variables.
However, even that will fail for some advanced modules that dynamically determine what to create in their own namespace and what to export at runtime (and the caller code could dynamically determine which subs to call - heck, sometimes which modules to import).
But this approach would work just fine for normal "Java/C-like" OO or procedural code that only calls statically named, predefined public subs and methods and accesses exported variables.
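A sketch of option 1B, a catch-all @INC hook that fabricates an empty stub package for any module that cannot be found on disk (use something like this only in the checking environment, never in production):

BEGIN {
    push @INC, sub {
        my (undef, $file) = @_;              # e.g. "Custom/Project/Lib.pm"
        (my $pkg = $file) =~ s{/}{::}g;
        $pkg =~ s{\.pm\z}{};
        # hand perl a tiny in-memory "module" that just defines the package
        my $stub = "package $pkg; sub import {} 1;\n";
        open my $fh, '<', \$stub or return;  # returning nothing declines the hook
        return $fh;
    };
}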
I would suggest that it's better to include your code repository in your syntax check. perl -I/path/to/working/code/repo/local_perl/ -c or set PERL5LIB=/path/to/working/code/repo/local_perl/ prior to running perl -c. Either option should allow you to check against your working code, assuming you have it in a directory structure similar to your live code.
I guess you could make stubs for the missing libraries in your home folder.
Have you looked into PPI? I think it does follow imports; however, it could perhaps be more easily modified to guess what looks like a function name.

In Lua how do you import modules?

Do you use
require "name"
or
local name = require "name"
Also, do you explicitly declare system modules as local variables? E.g.
local io = require "io"
Please explain your choice.
Programming in Lua 2ed says "if she prefers to use a shorter name for a module, she can set a local name for it" and nothing about local m = require "mod" being faster than require "mod". If there's no difference I'd rather use the cleaner require "mod" declaration and wouldn't bother writing declarations for pre-loaded system modules.
Either of them works. Storing it in a local will not change the fact that the module may have registered global functions.
Some libraries work only one way, and some only work the other way.
The require "name" syntax was the one introduced in Lua 5.1. As a note, this call does not always return the module; rather, it was expected that a global would be created with the name of the library (so you then have a _G.name to use the library with). For example, with earlier versions of gsl, if you did local draw = require"draw", the local draw would contain true and shadow the global draw created by the library.
The behaviour above is encouraged by the module() function, which is now effectively deprecated; any well-written new code will not use it.
The local name = require"name" syntax became preferred more recently (about 2008?), when it was decided that modules setting any globals for you was a bad thing.
As a point: none of my libraries set globals; they just return a table of functions, or in some other cases a function that works as an initialiser for the root object.
tldr;
In new code, you should use the latter local name = require"name" syntax; it works in the vast majority of cases, but if you're working with some older modules, they may not support it, and you'll have to just use require"module".
To answer your added question:
Do you require the system modules? No; you just assume they are already required. But I do localise all the functions I use (usually grouped into lines by the module they came from), including those from external libraries. This allows you to easily see which functions your code actually relies on, as well as removing all GETGLOBALs from your bytecode.
Edit: localising functions is now discouraged; to find accidental globals, use a linter like luacheck.
Sample module in (my) preferred style; it will only work with the local name = require"name" syntax:
local my_lib = require"my_lib"

local function foo()
  print("foo")
end

local function bar()
  print("bar", my_lib.new())
end

return {
  foo = foo;
  bar = bar;
}
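Assuming the module above is saved as sample.lua somewhere on package.path (the name is illustrative), a caller would use it like this:

local sample = require"sample"   -- receives the returned table; no globals are created
sample.foo()
sample.bar()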
I would say it mainly boils down to what you prefer, but there are some exceptions depending on what you are writing. Creating a local reference will gain you some speed, but in most cases this isn't a meaningful optimization. You should also point your local at the function you are using, not at the module table.
If you are writing a library, then I would recommend creating local references for all the global variables you use, even if you aren't using the module() function (which changes the global environment for the code that follows). The reason for this is that it prevents your library from calling globals which have been altered after the library was loaded.
Doing local io = require'io' ensures that you get the table from package.loaded.io, even if _G.io has been replaced by another table. I generally don't do this myself; I expect io to already be loaded and unmodified when I write Lua.
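As an example of pointing the local at the function rather than at the module table (the helper name is made up):

-- capture the function itself, so later changes to _G.string don't affect us
local format = string.format

local function describe(name, value)
  return format("%s = %s", name, tostring(value))
end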
You also have to remember that there are several ways to write a Lua module. Some modules don't return their module table, while others don't create any global variable. The most common solution is to both create and return a global, but you can't always rely on this.