I recently worked on a project that we had to deploy using PowerShell scripts. We wrote roughly 2,000 lines of code spread across different files. Some of those files were dedicated to common methods but, once each one grew to around 500 lines, it was hard to find which method to use or to tell whether a new one needed to be implemented.
So my question is about the best way to implement a PowerShell function library:
Is it better to have a few files with a lot of code, or a lot of files with only a few lines each?
The answer from @MikeShepard is conceptually the way to go. Here are just a few implementation ideas to go along with it:
I have open-source libraries in a number of languages. My PowerShell API starts with the top-level being organized into different topics, as Mike Shepard suggested.
For those topics (modules) with more than one function (e.g. SvnSupport), each public function is in a separate file with its private support functions and variables, thereby increasing cohesion and decreasing coupling.
To wrap the collection of functions within a topic (module) together, you could enumerate them individually (either by dot-sourcing or by including them in the manifest, as @Thomas Lee suggested). But my preference is a technique I picked up from Scott Muc: use the following code as the entire contents of your .psm1 file and place each of your other functions in a separate .ps1 file in the same directory.
# Dot-source every .ps1 file next to this .psm1, skipping Pester test files.
Resolve-Path $PSScriptRoot\*.ps1 |
    Where-Object { -not ($_.ProviderPath.Contains(".Tests.")) } |
    ForEach-Object { . $_.ProviderPath }
There is actually quite a lot more to say about functions and modules; the interested reader might find my article Further Down the Rabbit Hole: PowerShell Modules and Encapsulation published on Simple-Talk.com a useful starting point.
You can create a module in which to store all of your scripts dedicated to common jobs.
I agree with @Christian's suggestion and use a module.
One tactic you might use is to break the module up into multiple scripts and include them all in the final module. You can either explicitly dot-source them in a .PSM1 file, or specify the files in a manifest (.PSD1 file), as sketched below.
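For illustration, the manifest route might look roughly like this (the file and function names are hypothetical; .ps1 files listed under NestedModules are run in the module's session state):

# MyTools.psd1 -- hypothetical manifest stitching several scripts into one module
@{
    ModuleVersion     = '1.0.0'
    GUID              = '6f1a2b3c-4d5e-678f-9a0b-1c2d3e4f5a6b'   # arbitrary; generate your own
    RootModule        = 'MyTools.psm1'
    NestedModules     = @('Network.ps1', 'FileSystem.ps1', 'Logging.ps1')
    FunctionsToExport = @('Get-ServerStatus', 'Copy-LogFiles')
}

Functions not listed in FunctionsToExport stay private to the module, which keeps sessions uncluttered.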
I tend to have multiple modules based on subject matter (loosely, nouns). For instance, if I had a bunch of functions dealing with MongoDB, I'd have a MongoDB module. That makes it easy to pull them into a session if I need them, but doesn't clutter every session with a bunch of functions that I rarely use. A consistent naming convention will make it easy to know what to import. For example, modMongoDB.psm1 would be an easy name to remember.
As a side note, in PowerShell 3.0 module loading is automatic by default, so there's no need to preload a bunch of modules in your profile.
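To make the subject-matter layout concrete, here is a rough sketch (the module and function names are hypothetical, not from a real MongoDB module):

# modMongoDB.psm1 -- hypothetical subject-matter module
function Connect-MongoDatabase {
    param(
        [Parameter(Mandatory)] [string] $ConnectionString
    )
    # ...connection logic would live here...
    Write-Verbose "Connecting to $ConnectionString"
}

# Export only the public surface; helper functions stay module-private.
Export-ModuleMember -Function Connect-MongoDatabase

# In a session (or rely on 3.0 auto-loading once the module is on $env:PSModulePath):
Import-Module .\modMongoDB.psm1
Connect-MongoDatabase -ConnectionString 'mongodb://localhost:27017' -Verbose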
I'm working on a large data analysis project that incorporates lots of different elements, and hence I make heavy use of external toolboxes and functions from the File Exchange and GitHub. Adding them all to the path via startup.m is my current working method, but I'm running into problems with function names being shadowed across toolboxes. I don't want to manually rename functions or turn the toolboxes into packages, since a) it's a lot of work to check for shadowing and find all function calls and, more importantly, b) I'm often updating the toolboxes via git. Since I'm not the author, all my changes would be lost.
Is there a programmatic way of packaging the toolboxes so that they get their own namespaces? (With as little overhead as possible?)
Thanks for the help
You can achieve this. The basic idea is to put all of the toolbox's functions in a private folder and expose only a single entry point. This entry point is the only file that sees the toolbox, and at the same time it sees the toolbox's functions first, regardless of the order of the search path.
toolbox/exampleToolbox.m
function varargout = exampleToolbox(fcn, varargin)
% Dispatch into the toolbox: the real functions live in the private folder,
% so only this wrapper can see them and they cannot shadow anything else.
fcn = str2func(fcn);                % resolved within this file's scope
varargout = cell(1, nargout);       % preallocate the requested outputs
[varargout{:}] = fcn(varargin{:});  % forward all input arguments
end
with toolbox/private/foo.m being an example function (private functions are visible only to functions defined in the folder directly above the private folder, so only the wrapper can reach them).
Now you can call foo(1,2,3) via exampleToolbox('foo',1,2,3)
The same technique could be used to generate a class, so that you could call exampleToolbox.foo(1,2,3).
In Java, there is a way to import a class and all of its children in one line:
import java.util.*;
In Perl, I've found I can only import specific classes:
use Perl::Utils::Folder;
use Perl::Utils::Classes qw(new run_class);
Is there a similar way in Perl, like in Java, to import everything that falls under a tree structure?
No, there is not a way to easily do what you are after.
You could walk the relevant paths in your Perl library's filesystem and use every .pm file you came across (that's what Module::Find, as suggested by @Daniel Böhmer, does; see the sketch after the list below), but that can miss a few things:
Packages that are declared in funny ways/at runtime.
Multiple packages per module file.
Other cases I haven't thought of.
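For reference, the Module::Find version of that walk looks roughly like this (Perl::Utils is the hypothetical namespace from the question):

use strict;
use warnings;
use Module::Find qw(useall);

# Find every module under the Perl::Utils:: namespace on @INC,
# load each one, and return the list of package names loaded.
my @loaded = useall Perl::Utils;
print "Loaded: @loaded\n";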
This is also a bad idea, for a few reasons:
You mentioned "classes" in your question, rather than just packages. Perl packages and subpackages do not necessarily represent classes/instantiable object-oriented code. If you were to programmatically generate a list of all packages in a hierarchy and then call $packagename->new() on each of them, you might get a runtime error if one of the packages was just a library of functions.
Packages and subpackages often are not directly related, developed by the same people, or used for similar things. Just because a package starts with Net:: doesn't mean that it will obey standard conventions that other Net::-prefixed packages expect. For example, File::Find and File::Tail share a prefix, but have very little to do with each other; the prefix is in common because both utilities work with files as their goal.
Lots of packages do things at BEGIN/INIT/etc time when they're compiled. Some of them (sadly) do different things depending on the order in which they're used relative to other modules. The solution to this problem for module developers is "don't do that", but for module users, it's "use sparingly, and only when needed".
It clutters your local namespace with lots of potentially-exported symbols you don't necessarily need (to conditionally import symbols, you'll have to use import arguments like you're doing in your example; there's no programmatic way to define "symbols I'm interested in", since Perl doesn't have that kind of static analysis at compile time . . . not for lots of call styles, at least).
It slows down your program's startup time by compiling things you might not necessarily need. This might not seem important at the early phase of a project, but for larger projects it is very easy to end up in situations where you're pulling in over a thousand CPAN modules when you start Apache (or launch your main script, or whatever), and your app takes more than a minute just to start.
I have a hunch that you're trying to reduce boilerplate (as in: all of your modules have a big block of use statements at the top, and that's duplicated everywhere). There are a few ways to do this, starting with:
Don't: import things in each module as you need them, and use strict/warnings and lots of tests to be told early on if you're calling functionality that you haven't imported yet.
You could also make your own Exporter subclass that uses all of your standard modules and adds the functions that you frequently use from them to its @EXPORT list (or splices their @EXPORT lists onto its own, or uses Exporter sub-sub-classing, or something); see the sketch after this list.
Factor your code so that the parts that depend on multiple imported modules live in a single utility module, and import that.
Factor your code so that the parts that depend on the imported modules live in a parent class, and address its methods via instances of subclasses (or SUPER), so your subclasses don't have to explicitly contain the imports, e.g. $instance->method_that_calls_an_imported_function_in_the_parent();
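A minimal sketch of that aggregator idea (the module name and the chosen functions are hypothetical; it simply re-exports functions it has itself imported):

package My::CommonImports;   # hypothetical aggregator module
use strict;
use warnings;
use parent 'Exporter';

# Pull in the modules you use everywhere...
use Scalar::Util qw(blessed reftype);
use List::Util   qw(sum max);

# ...and re-export just the handful of functions you actually care about.
our @EXPORT = qw(blessed reftype sum max);

1;

# Elsewhere, one line replaces the usual block of use statements:
#   use My::CommonImports;
#   print sum(1 .. 5), "\n";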
Also, as an aside, using package.* imports in Java is debatable, and has many of the same drawbacks as doing it in Perl.
In Perl, the class Foo::Bar::Foo may not be a subclass of Foo::Bar. Nor is there any guarantee that a subclass module even has the same class prefix: IO::File is a subclass of IO::Handle, not of IO.
There also isn't an easy way to tell whether a Perl module is a subclass of another Perl module. There are (at least) three ways to declare a subclass's relationship to a class (each is sketched after this list):
use parent
use base
The @ISA package variable
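The three declarations look like this (Parent and the Child packages are placeholder names):

package Child1;
use parent 'Parent';      # loads Parent.pm and appends it to @ISA

package Child2;
use base 'Parent';        # older pragma, same effect in this simple case

package Child3;
use Parent;               # with a bare @ISA you must load the parent yourself
our @ISA = ('Parent');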
It is possible to use @INC to find all modules, then look at their source for use parent, use base, and @ISA declarations, build a Perl class matrix, and then go through that matrix to load the classes you need. That will probably be slow and cumbersome, and it doesn't even cover Moose-based classes.
You're asking the wrong question. You're asking "Find all of the subclasses of a particular class." That will include classes that you're probably not even interested in. I know (from LWP, for example) that there can be dozens of classes and subclasses that include stuff you're not even interested in.
What you should be asking is "What do I need to do?", and then find the classes that fulfill your needs. If those classes happen to be child classes of a particular parent class, they will load the required parent class themselves.
We do Java programming here, and one of our standards is not to use asterisks in import statements. This is considered sloppy programming. If you need a particular class, you should import it explicitly rather than relying on a wildcard import. Many of our reporting tools have problems with asterisk declarations in import statements.
There is a Module::Find module, but I am not sure exactly how it works. I believe it simply assumes that subclasses are in the same module hierarchy as the superclass, but that's far from true in Perl.
In general, I think it is a bad idea to load a whole 'tree' of modules (or subclasses so to speak).
There is definitely something wrong with your design if you need to know anything and everything about subclasses/modules. It breaks encapsulation: you should not need to know how a class handles its responsibilities.
I am writing a script (let's call it main.m) which calls a function that I wrote (let's call it myfunc.m). It turns out I have a few of these myfunc.m functions at different locations on my MATLAB path.
I would like to somehow restrict MATLAB to look only within the folder where my main.m script is when it resolves custom functions.
So, for example, if I have
C:\example\main.m
C:\example\myfunc.m
and
C:\asd\main.m
C:\asd\myfunc.m
and I run main.m in the folder example, then when it comes to the call to myfunc, it can ONLY call the function within C:\example\. The same goes if I run main.m in folder C:\asd\.
I hope this makes sense, thanks.
In the short term, a fairly quick solution would be to make your myfunc.m files into private functions, which are ahead of regular functions in terms of precedence and can only be called by functions in the folder directly above the private folder.
Simply place your myfunc.m files in a folder called private:
C:\example\main.m
C:\example\private\myfunc.m
and
C:\asd\main.m
C:\asd\private\myfunc.m
Now example\private\myfunc.m is only callable by things in the folder example, and asd\private\myfunc.m is only callable by things in the folder asd. In addition, they are higher in precedence than other functions, so you can ensure that the right one always gets called.
Longer term, you might benefit from taking a look at some of the other more extensive ways that MATLAB provides for managing namespace conflicts, such as subfunctions, object-oriented programming, and packages.
Subfunctions are extremely simple to get the hang of. Packages are not at all complicated but require a bit of thought about how to organize your code (which is usually well worth it). OO programming is a much larger change in typical programming style, but for larger applications is pretty essential.
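For instance, a package-based layout for the folders above might look like this (the +example and +asd package folders are an illustrative assumption):

% Folder layout: C:\example and C:\asd stay on the path and each gets a package folder:
%   C:\example\+example\myfunc.m
%   C:\asd\+asd\myfunc.m

% main.m can then name the version it wants explicitly:
result = example.myfunc(1, 2, 3);   % always the one under +example

% or import it once so the short name keeps working:
import example.myfunc
result = myfunc(1, 2, 3);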
I'm just starting to learn Scala for a new project. I've got to the point where I would like to define different properties files for the different environments the app is going to run on, ideally in a similar way to Rails: very lightweight, just one properties file per environment that is loaded based on its name. I don't really care whether it's a Java properties file, YAML, or Scala code.
In the spirit of not reinventing the wheel, I've been looking to see whether there is some accepted, standard Scala way of doing this, but I can't find one. I've found a few similar but not identical questions here where people suggest using system properties in the startup script, but this feels like it would end up being a nightmare.
I could obviously implement it if need be, but it feels like the sort of thing that should already exist. So - does it?
I'm using sbt if that makes a difference.
I know of Configgy. Also, Akka/Play 2.0 will be using Config, which looks nice too. See the blog post about the latter.
Basically, Configgy has been around for a while now but has been deprecated, while Config is all-new. However, having Config as the default configuration tool of the Typesafe Stack will probably make it the preferred tool pretty fast.
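For a rough idea of how per-environment files could work with Config (the file names, the APP_ENV variable, and the db.url key are illustrative assumptions, not part of the library):

// src/main/resources/development.conf, production.conf, etc. hold the per-environment keys
import com.typesafe.config.{Config, ConfigFactory}

object Settings {
  // Pick the config file from an environment variable, defaulting to development.
  private val env: String = sys.env.getOrElse("APP_ENV", "development")

  // ConfigFactory.load("development") reads development.conf from the classpath.
  val config: Config = ConfigFactory.load(env)

  val dbUrl: String = config.getString("db.url")
}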
I have written a Configgy replacement called Configrity. It can use different input formats (like YAML), it's immutable, it supports functional patterns, and it uses type classes to automatically convert values to the desired type.
I have written BeeConfig, a replacement for java.util.Properties, except that it is a Scala API and uses UTF-8 encoded configuration files. It supports string interpolation, chaining, and a bunch of other features, but its main objective is simplicity.
Bitbucket | Blog post
Rick
I want to create one big file for all the cool functions I find somehow reusable and useful, and put them all into that single file. For the moment I don't have many, so it's not worth thinking much about splitting them into several files, I guess. I would use #pragma marks to separate them visually.
But here's the question: would those unused methods get in the way at all? Would my application blow up or perform worse? Or is the compiler/linker clever enough to know that functions A and B are not needed, and thus not copy their code into my resulting app?
This sounds like an absolute architectural and maintenance nightmare. As a matter of practice, you should never make a huge blob file with a random set of methods you find useful. Add the methods to the appropriate classes or categories. See here for information on the blob anti-pattern, which is what you are doing here.
To directly answer your question: no, methods that are never called will not affect the performance of your app.
No, they won't directly affect your app. Keep in mind, though, that all that unused code is going to make your functions file harder to read and maintain. Plus, writing functions you're not actually using at the moment makes it easy to introduce bugs that won't become apparent until much later, when you start using those functions; that can be very confusing because you've forgotten how they're written and will probably assume they're correct because you haven't touched them in so long.
Also, in an object oriented language like Objective-C global functions should really only be used for exceptional, very reusable cases. In most instances, you should be writing methods in classes instead. I might have one or two global functions in my apps, usually related to debugging, but typically nothing else.
So no, it's not going to hurt anything, but I'd still avoid it and focus on writing the code you need now, at this very moment.
The code would still be compiled and linked into the project; it just wouldn't be used by your code, meaning your resulting executable will be larger.
I'd probably split the functions into separate files according to the common areas they address, so I'd have a library of image functions separate from a library of string-manipulation functions, and then include whichever are pertinent to the project at hand.
I don't think having unused functions in the .h file will hurt you in any way. If you compile all the corresponding .m files containing the unused functions into your build target, then you will end up with a bigger executable than necessary. The same goes if you include the code via static libraries.
If you do use a function but you didn't include the right .m file or library, then you'll get a link error.