I understand that the special UNIVERSAL class is the implicit base class of all other classes: if a method is not found anywhere while Perl traverses the inheritance hierarchy, Perl looks in UNIVERSAL to see whether the method can be found there. However, when you create a distribution, no UNIVERSAL.pm file is created, and the UNIVERSAL methods 'DOES' and 'can' already exist without one... As such, I am not sure whether I should be writing UNIVERSAL methods into random packages like so:
sub UNIVERSAL::nicemethod {
    launch_teh_missles();
}
Or should I be creating a separate UNIVERSAL package and .pm file? What is considered best practice?
You can add new methods to UNIVERSAL the same way as to any other package:
package
UNIVERSAL; # Line break to fool CPAN indexer
sub nice_method {
    ...
}
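Once nice_method has a real body, it is reachable from any class or blessed object, because method dispatch falls back to UNIVERSAL. A minimal sketch (the Foo class here is purely illustrative):

package Foo;
sub new { bless {}, shift }

package main;

my $obj = Foo->new;
$obj->nice_method;    # resolved via UNIVERSAL, although Foo never defines it
Foo->nice_method;     # works as a class-method call, too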
Is it possible to distribute the code of a class to several files?
Honestly, I think the best route is to break it up into different roles that you compose into your class.
After all, how are you planning on breaking up your class?
Are you going to group methods and attributes according to similarity?
At that point you've just about come up with a role, so you might as well make it a role.
If you look at the source for Rakudo you see things like this:
class Perl6::Metamodel::ClassHOW
    does Perl6::Metamodel::Naming
    does Perl6::Metamodel::Documenting
    does Perl6::Metamodel::LanguageRevision
    does Perl6::Metamodel::Stashing
    does Perl6::Metamodel::AttributeContainer
    does Perl6::Metamodel::MethodContainer
    does Perl6::Metamodel::PrivateMethodContainer
    does Perl6::Metamodel::MultiMethodContainer
    does Perl6::Metamodel::MetaMethodContainer
    does Perl6::Metamodel::RoleContainer
    does Perl6::Metamodel::MultipleInheritance
    does Perl6::Metamodel::DefaultParent
    does Perl6::Metamodel::C3MRO
    does Perl6::Metamodel::MROBasedMethodDispatch
    does Perl6::Metamodel::MROBasedTypeChecking
    does Perl6::Metamodel::Trusting
    does Perl6::Metamodel::BUILDPLAN
    does Perl6::Metamodel::Mixins
    does Perl6::Metamodel::ArrayType
    does Perl6::Metamodel::BoolificationProtocol
    does Perl6::Metamodel::REPRComposeProtocol
    does Perl6::Metamodel::InvocationProtocol
    does Perl6::Metamodel::ContainerSpecProtocol
    does Perl6::Metamodel::Finalization
    does Perl6::Metamodel::Concretization
    does Perl6::Metamodel::ConcretizationCache
{
    … # only 300 lines of code
}
If you do a good job of splitting up your roles, you should be able to use them in many classes.
How many classes in the Rakudo code base do you think compose the Perl6::Metamodel::Naming role?
That role only provides a few things, and is only 45 lines long.
Here is an abbreviated version.
(All of the code in the methods has been deleted here for brevity.)
role Perl6::Metamodel::Naming {
    has $!name;
    has $!shortname;

    method name($obj) {
        …
    }
    method set_name($obj, $name) {
        …
    }
    method shortname($obj) {
        …
    }
    method set_shortname($obj, $shortname) {
        …
    }
}
Yes, there always is. But there is no standard, supported way (yet, anyway).
You can take the approach that Raku takes itself when creating the core settings: concatenate the files into a single file, and compile that. When you're building Rakudo from scratch, that's what's happening when you see the line:
+++ Generating gen/moar/foo
The generated files can be inspected in the gen/moar directory. At one point, I brought this up in a problem solving issue, but that never went anywhere. Perhaps that should be revisited.
You can use augment class. But that currently only makes sense inside a single file, because with multiple files it creates multiple versions of the same "module" during pre-compilation, and the system is not able to determine what to resolve to what. This is when you realize that pre-compilation of Raku modules currently creates libraries that need to be statically linkable at runtime.
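For reference, a minimal single-file sketch of augment class (the Foo class is made up; note that augment is gated behind the MONKEY-TYPING pragma):

use MONKEY-TYPING;    # required before 'augment' is allowed

class Foo {
    method one { "one" }
}

augment class Foo {
    method two { "two" }
}

say Foo.new.two;    # OUTPUT: two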
EDIT: made it shorter.
We created three modules following the Prism documentation and our requirements.
We sliced horizontally with our modules:
SharedServices
BusinessLogic
UserInterface
In UserInterface we are using Syncfusion components and other packages. It would be great to put everything in the UserInterface module, but how can we reference NuGet assemblies from that module in the shell (to apply theming, for example) and avoid having references in each module and the shell?
Should we add the NuGet package to each module and the shell (is that bad...?), or is it possible to have one module that defines base classes referencing the external assemblies, so that it would be themable (with a ResourceDictionary) and usable in the whole solution (shell and other modules)?
Thanks.
This is a very broad question, and it might well be closed, but I'll try to give you a few guiding thoughts:
Generally, you either slice horizontally (as you did: a UI module with all the views, plus a logic module with all the services) or vertically (as your Product module suggests: views, view models and services for the product in one module, those for the user in another).
You can do both, but then you should "slice through", so one module for product-ui, one for user-ui, one for product-services, one for user-services... you get the idea. That means a lot of modules, though.
Also, when creating your modules, have an idea of what you want to achieve. Modules can encapsulate components to be reused in another app. Or they can encapsulate exchangeable components, so you could create a car-sharing app today and tomorrow swap out the car-module for a bike-module and have a bike-sharing app. Or they can be used to enforce segregation of code based on risk analysis in a regulated environment. What I'm trying to convey: don't create modules just to have modules, make each module have a defined purpose.
Also, define the interfaces for the modules. I don't like modules to reference each other, as it effectively destroys all the segregation that would otherwise be there. Create separate non-module assemblies that only contain public interfaces. Then make your modules contain the implementations as internal types. In an ideal world, no module assembly contains a public type. The interface assemblies can be per module, per consumer, or per link between modules (those checked boxes in your N2 chart - you have one, don't you?).
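A tiny sketch of that layout (the names are made up; this is plain C#, nothing Prism-specific):

// Contracts.Theming.dll - a non-module assembly containing only public interfaces
namespace Contracts.Theming
{
    public interface IThemeService
    {
        void ApplyTheme(string themeName);
    }
}

// UserInterface.dll - the module; the implementation stays internal
namespace UserInterface
{
    internal class SyncfusionThemeService : Contracts.Theming.IThemeService
    {
        public void ApplyTheme(string themeName)
        {
            // call into the Syncfusion packages referenced only by this module
        }
    }
}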
You want to keep the number of modules reasonable, as well as the dependencies between them (not as in "assembly references" but through interface-assembly).
how can we reference NuGet assemblies from that module in the shell (to apply theming, for example) and avoid having references in each module and the shell?
You should separate the "interface" part (e.g. base classes or DTOs; not part of the module) from the actual services part (that's the module). Example: Unity has a NuGet package for the interfaces (Unity.Abstractions) and one that contains the container implementation (Unity.Container). There's nothing wrong with everyone referencing the interface; basically, that's saying "I want to use that interface".
Usually I create a "Scala Object" that keeps all my global constants.
I have been told that it is better to use "package object" in order to keep constants.
I have never used "package object" previously so my questions are:
What are the best practices to hold constants in Scala and why?
Why do I need "package object"?
You don't need a package object.
However it allows you to make code available at the package level without declaring another class or object.
It's a convenience.
package foo.bar

package object dem {
  // members declared here are available throughout the `foo.bar.dem`
  // package and all of its subpackages, without an import statement
}
The only convention specified for constants concerns casing: constants should be PascalCase.
Keep in mind that declaring constants in the package object might make them available in places where you don't want them to be available.
For reference, the naming convention for package objects:
The standard naming convention is to place the definition above in a file named package.scala that's located in the directory corresponding to package pp.
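Putting that together, a minimal sketch of constants held in a package object (the package and constant names are made up):

// file: foo/bar/package.scala
package foo

package object bar {
  // PascalCase, per the convention above
  val MaxRetries: Int = 3
  val DefaultTimeoutMillis: Long = 5000L
}

// file: foo/bar/Client.scala - inside foo.bar, no import is needed
package foo.bar

object Client {
  def describe(): String =
    s"retries = $MaxRetries, timeout = ${DefaultTimeoutMillis}ms"
}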
In a nutshell, I'm trying to model a network topology using objects for every instance in the network. Additionally, I have a top-level manager class responsible for, well, managing these objects and performing integrity checks. The file structure looks like this (I left out most of the object files as they are all structured pretty much the same):
Manager.pm
Constants.pm
Classes/
+- Machine.pm
+- Node.pm
+- Object.pm
+- Switch.pm
Coming from quite a few years in OOP, I'm a fan of code reuse, so I set up inheritance between those objects; the inheritance tree (in this example) looks like this:
Switch  -+-> Node --> Object
Machine -+
All those objects are structured like this:
package Switch;
use parent qw(Node);

sub buildFromXML {
    ...
}

sub new {
    ...
}

# additional methods
Now the interesting part:
Question 1
How can I ensure correct loading of all those objects without typing out the names statically?
The underlying problem is: if I just require "$_" foreach glob("./Classes/*"); I get many "Subroutine new redefined at" errors. I also played around with use parent qw(-norequire Object), Module::Find and some other @INC modifications in various combinations; to make it short: it didn't work. Currently I'm statically importing all used classes, and they auto-import their parent classes.
So basically what I'm asking: What is the (perl-)correct way of doing this?
And, more advanced: it would be very helpful to be able to create a more complex folder structure (as there will be quite a few objects) and still have inheritance + "autoloading".
Question 2 - SOLVED
How can I "share my imports"? I use several libraries (my own, containing some helper functions, LibXML, Scalar::Util, etc.) and I want to share them amongst my objects. (The reasoning behind that is, I may need to add another common library to all objects and chances are high that there will be well above 100 objects - no fun editing all of them manually and doing that with a regex / script would theoretically work but that doesn't seem like the cleanest solution available)
What I tried:
Import everything in Manager.pm: works inside the Manager package, but elsewhere gives me errors like "Undefined subroutine &Switch::trace called".
Create an include.pl file and do/require/use it inside every object: gives me the same errors.
Some more stuff I sadly don't remember
include.pl would basically look like this:
use lib_perl;
use Scalar::Util qw(blessed);
use XML::LibXML;
use Data::Dumper;
use Error::TryCatch;
...
Again I ask: What's the correct way to do it? Am I using the right approach and just failing at the execution or should I change my structure completely?
It doesn't matter that much why my current code doesn't work; providing a correct, clean approach for these problems would be more than enough :)
EDIT: Totally forgot the Perl version -_- Sidenote: I can't upgrade Perl, as I need libraries that are stuck with 5.8 :/
C:\> perl -version
This is perl, v5.8.8 built for MSWin32-x86-multi-thread
(with 50 registered patches, see perl -V for more detail)
Copyright 1987-2006, Larry Wall
Binary build 820 [274739] provided by ActiveState http://www.ActiveState.com
Built Jan 23 2007 15:57:46
This is just a partial answer to question 2, sharing imports.
Loading a module (via use) does two things:
1. Compiling the module and installing its contents into the (shared) namespace hierarchy. See perldoc -f require.
2. Calling the import sub on the loaded module. This installs some subs or constants etc. into the namespace of the caller - a process that the Exporter class largely hides from view. This part is what lets you use subs without their full name, e.g. max instead of List::Util::max. See perldoc -f use.
Let's look at the following three packages: A, B and User.
{
    package A;
    use List::Util qw(max);
    # can use List::Util::max
    # can use max
}
{
    package User;
    # can use List::Util::max -> it is already loaded
    # cannot use max; this name is not defined in this namespace
}
Package B defines a sub load that loads a predefined list of modules and subs into the caller's namespace:
{
    package B;
    sub load {
        my $package = (caller())[0]; # caller is a built-in; it fetches the calling package name
        eval qq{package $package;} . <<'FINIS';
use List::Util qw(max);
# Add further modules to load here.
# You can place arbitrarily complex code in this eval string
# to execute it in all modules that call this sub
# (e.g. testing and registering).
# However, this is orthogonal to OOP.
FINIS
        if ($@) {
            # do error handling
        }
    }
}
Inside the eval'd string, we temporarily switch into the caller's package and then load the specified module. This means that the User package now effectively looks like this:
{
    package User;
    B::load();
    # can use List::Util::max
    # can use max
}
However, you have to make sure the load sub is already loaded itself; use B if in doubt. It might be best to execute B::load() in the BEGIN phase, before the rest of the module is compiled:
{
    package User;
    BEGIN { use B; B::load() }
    # ...
}
is equivalent to
{
    package User;
    use B;
    use List::Util qw(max);
    # ...
}
TIMTOWTDI. Although I find evaling code quite messy and dangerous, it is the way I'd pursue in this scenario (rather than do-ing files, which is similar but has different side effects). Manually messing with typeglobs in the package namespace is hell in comparison, and copy-pasting a list of module names is like going back to the days when there wasn't even C's preprocessor.
Edit: Import::Into
… is a CPAN module providing this functionality via an interesting method interface. Using this module, we would redefine our B package the following way:
{
    package B;
    use List::Util;   # you have to 'use' or 'require' this first, before using 'load'
    use Import::Into; # has to be installed from CPAN first
    sub load {
        my $package = caller;
        List::Util->import::into($package, qw(max));
        # should work too: strict->import::into($package);
        # ...
    }
}
This module hides all the dirty work (evaling) from view and does method call resolution gymnastics to allow importing pragmas into other namespaces.
Addendum to the Import::Into Solution
I found a scenario that seems to require eval() even with the Import::Into solution: when the module being imported into is itself among the modules that the loader package uses. This may be a common scenario for people using Import::Into.
Specifics:
I created a module uses_exporter with separate subs for importing different groups of modules, e.g. load_generic() and load_list_utils().
The uses in load_list_utils() are of public modules like List::MoreUtils, AND of a module of my own, list_utils_again. That local module also calls load_list_utils(). The call fails if load_list_utils() uses list_utils_again.
My solution was to put the use of list_utils_again into an eval which does not execute when $target eq 'list_utils_again'.
The correct, idiomatic Perl way to do this is not to always load a bunch of modules whether they are used or not; it is to have every file use those modules it directly (not indirectly) needs.
If it turns out that every file uses the same set of modules, you might make things simpler by having a single dedicated module to use all those in that common set.
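If you do go that route, such a dedicated module can be built on Import::Into, as shown earlier in this thread. A sketch, with a made-up module name and a made-up common set:

package My::Common;
use strict;
use warnings;
use Import::Into;
use List::Util   ();   # load here without importing anything...
use Scalar::Util ();

sub import {
    my $target = caller;                               # whoever says 'use My::Common;'
    strict->import::into($target);                     # ...then re-export into the caller
    warnings->import::into($target);
    List::Util->import::into($target, qw(max));
    Scalar::Util->import::into($target, qw(blessed));
}

1;

Each file then starts with a single use My::Common; and receives the whole common set.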
How can I perform a "shallow" syntax check on Perl files? The standard perl -c is useful, but it also checks the imports. That is sometimes nice, but not great when you work in a code repository and push to a running environment, and a function is defined in the repository but not yet pushed to the running environment. The check then fails because the imports reference system paths (i.e. use Custom::Project::Lib qw(foo bar baz)).
It can't practically be done, because imports have the ability to influence the parsing of the code that follows. For example use strict makes it so that barewords aren't parsed as strings (and changes the rules for how variable names can be used), use constant causes constant subs to be defined, and use Try::Tiny changes the parse of expressions involving try, catch, or finally (by giving them & prototypes). More generally, any module that exports anything into the caller's namespace can influence parsing because the perl parser resolves ambiguity in different ways when a name refers to an existing subroutine than when it doesn't.
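For instance, this snippet only compiles because Try::Tiny's import installs try and catch as subs with & prototypes (a small illustration, not from the original question):

use strict;
use warnings;
use Try::Tiny;    # without this import, the try/catch below is a syntax error

try {
    die "oops\n";
} catch {
    warn "caught: $_";    # inside catch, $_ holds the error
};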
There are two problems with this:
1. How do you keep -c from failing if the required modules are missing?
There are two solutions:
A. Add a fake/stub module in production.
B. In all your modules, use a special catch-all @INC subroutine entry (using subs in @INC is explained here; a sketch follows below). This obviously has the problem of the module NOT failing at real production runtime if the libraries are missing - DoublePlusNotGood in my book.
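For illustration, such a catch-all @INC hook might look like this (a simplified sketch; because it is pushed onto the end of @INC, it only fires for modules that nothing else could supply):

BEGIN {
    push @INC, sub {
        my ($hook, $file) = @_;     # $file is e.g. "Custom/Project/Lib.pm"
        my $fake = "1;\n";          # empty-but-true module source
        open my $fh, '<', \$fake or return;
        return $fh;                 # perl compiles the fake source instead of dying
    };
}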
2. Even if you could somehow skip failing on missing modules, you would STILL fail on any use of the identifiers imported from the missing module or used explicitly from that module's namespace.
The only realistic solution to this is to go back to solution 1A and use a fake stub module, but this time one that has a declared and (as needed) exported identifier for every public interface, e.g. do-nothing subs or dummy variables (see the sketch at the end of this answer).
However, even that will fail for some advanced modules that dynamically determine what to create in their own namespace and what to export at runtime (and the caller code could dynamically determine which subs to call - heck, sometimes even which modules to import).
But this approach would work just fine for normal "Java/C-like" OO or procedural code that only calls statically named, predefined public subs and methods and accesses exported variables.
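Here is what such a stub might look like for the Custom::Project::Lib module named in the question (purely illustrative; the real module's public interface will differ):

# Stub of Custom::Project::Lib, installed only where 'perl -c' runs.
package Custom::Project::Lib;
use strict;
use warnings;
use Exporter 'import';

our @EXPORT_OK = qw(foo bar baz);   # every public identifier the real module exports

sub foo { }   # do-nothing subs: enough to satisfy compilation
sub bar { }
sub baz { }

1;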
I would suggest that it's better to include your code repository in your syntax check. perl -I/path/to/working/code/repo/local_perl/ -c or set PERL5LIB=/path/to/working/code/repo/local_perl/ prior to running perl -c. Either option should allow you to check against your working code, assuming you have it in a directory structure similar to your live code.
I guess you could make stubs for the missing libraries in your home folder.
Have you looked into PPI? I think it does follow imports; however, it could perhaps be more easily modified to guess what looks like a function name.