I have a Moose::Role that contains a network client as an attribute:
package Widget;
use Moose::Role;
has 'network_thingy' => (
    is  => 'rw',
    isa => 'Maybe[ThingyClient]',
);
And of course, I have a couple concrete Moose classes which use this role:
package BlueWidget;
use Moose;
with 'Widget';
Now it comes to functional testing of the Widgets. We have the ability to create ThingyServer objects, and it would be much faster and overall excellent to directly use ThingyServer objects instead of spinning up a daemon and having a ThingyClient connect to it over the network. Since ThingyClient & ThingyServer conveniently have the exact same methods, this should be easily possible. But of course, Moose is demanding that I use a ThingyClient when the test eventually constructs a BlueWidget.
I did some research, and came across the Moose::Meta documentation. Seemed perfect! So here's the test code:
my $metarole = Moose::Meta::Role->initialize('Widget');
# first remove the old attribute
$metarole->remove_attribute('network_thingy');
I was going to add a new attribute, but I thought I'd check on the state of the role & class first. Now if I dump out the $metarole, it looks great. There's no network_thingy attribute anymore. But if I construct a BlueWidget object, or just peek inside the metaclass...
my $metaclass = Moose::Meta::Class->initialize('BlueWidget');
diag Dumper ($metaclass);
... sure enough network_thingy is still there. This is not at all what I expected. How can I modify/remove/replace an attribute of the Widget role at runtime?
When a class consumes a role, attributes are copied from the role to the class. If you then change the attribute in the role, the copy in the class is unaffected.
So you would need to loop through the classes that have consumed the role and change the attribute in each class. There's a consumers method in Moose::Meta::Role that could help you get a list of classes that have consumed the role; however, it only covers classes that have directly consumed the role, and not, say, subclasses of those.
If the classes have been made immutable (__PACKAGE__->meta->make_immutable), you'll need to make them mutable again before you can modify the attribute.
Overall, it's probably a better idea to just alter the role module (i.e. edit the file) rather than trying to tweak the attribute at run time. Maybe set isa to a duck_type type constraint?
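For illustration, here is a rough, untested sketch of what that run-time patching might look like; the duck_type method names (connect, send_thingy) are made up for the example, and immutable classes are flipped back temporarily:

use Moose::Util::TypeConstraints;

my $role = Moose::Meta::Role->initialize('Widget');

for my $class_name ($role->consumers) {        # direct consumers only
    my $meta = $class_name->meta;

    my $was_immutable = $meta->is_immutable;
    $meta->make_mutable if $was_immutable;

    $meta->remove_attribute('network_thingy');
    $meta->add_attribute('network_thingy',
        is  => 'rw',
        # accept anything that responds to the methods the widgets actually call
        isa => duck_type([qw(connect send_thingy)]),
    );

    $meta->make_immutable if $was_immutable;
}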
At the moment I'm trying to write an API in Scala. This API should handle file backends, like Smb, S3, filesystem storage, etc.
So I wrote some classes, like Storage, which is a base class for storage backends, and subclasses like FileSystemStorage and SmbStorage which subclass Storage. From now on, I want to use those classes if I specify them in a settings file.
I want it to work like it does in Django: https://docs.djangoproject.com/en/1.6/ref/settings/#std:setting-DEFAULT_FILE_STORAGE where I can specify a string pointing to my default storage engine.
It should then "magically" work so that I can use DefaultStorage to access either FileSystemStorage or SmbStorage, and it should also be possible to create more "storage" classes. Is this even possible?
Currently I have something in mind for how I could realize this, but I'm unsure whether it is good practice in Scala.
JVM classes are already loaded dynamically. What you want is to choose an instance dynamically.
You can do something like:
def byName(name: String) = name match {
  case "FileSystemStorage" => FileSystemStorage
  case "SmbStorage"        => SmbStorage
}
I am assuming these are objects. If they are classes, just add the new keyword.
Now, if the class name is unknown at compile time, you can do Class.forName(fullyQualifiedClassName). But this will give you a Class object, not an instance of the class, in which case you will need to invoke newInstance (assuming it has an argument-less constructor). The way you described your problem suggests you don't want this approach.
I have a Moose-based class (Foo) that has 4 properties, let's say:
SF1...SF4
each of type HashRef[Any].
Currently all have default values. Later, we are going to get these values from a MySQL table.
My approach was to have the Foo class consume roles depending on where the data comes from: I can put SF1...SF4 in a role called Foo::DB, which will provide them with default values from the database.
I would also have a role, Foo::Local, which has the default values hard-coded, so that later, when we switch to the DB, I will only need to change the 'with ...' line.
Am I going in the right direction, or I should do it differently?
It's not clear why you need to populate the data from a role. I think you can just use initializer subs: make your attributes lazy, and then define init_<attribute> subs wired up as builders. The first time the value of the attribute is needed, if it is not already set, the initializer sub will be called to provide the value. When you plug in the database you can simply teach your initializers how to query the database for the values.
package Foo;
use Moose;

has SF1 => ( is => 'rw', isa => 'HashRef[Any]', lazy => 1, builder => 'init_SF1' );

sub init_SF1 {
    { hi => 'how are you' }
}
Alternatively, if you want to be able to go back and forth (e.g., for testing), then yes, you can bundle your initializers into the roles and apply the role depending on the situation. Or you can just supply the values inline in your tests. For instance
use Test::More;
use Foo;
my $foo = Foo->new(
    SF1 => {
        row1 => 'fake test data',
        row2 => 'also fake',
    },
    SF2 => {},
); # now init_SF[12] will not be called
If you tell me why you're doing this, then I can give you a better answer.
Let me see if I understand you correctly.
You have four attributes that should be assigned a default value.
You have defined that default value in the MySQL database schema. What you would like to happen is that anytime you create a Foo instance, the default values will be populated from the defaults you have defined in the MySQL schema.
If I am correct in my understanding of what you are trying to do, then my advice is: don't do it that way (unless it is absolutely a requirement of your project). Define the default values of your attributes using Moose's default or builder properties.
has 'bar' => (
    is      => 'ro',
    default => 'fubar',
);
If you were to look up the default values from the database schema instead of defining them in your class, you would create more work for yourself, add unnecessary complexity to your program, and add expensive database calls that could be avoided. You would need to parse the database schema and determine what the defaults should be for the given attribute, and you would either need to do this every time you created a new object (expensive) or maintain a cache of the default values. Sure, you could create a Moose extension that implements some magic and does this for you transparently, but that seems like a lot of work for a not-so-appealing solution. I would just use Moose's 'default' attribute property unless you have a really good reason not to.
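If you do eventually need to pull the value from somewhere else, a builder keeps that in a single sub you can swap out later. A minimal sketch, reusing the 'bar'/'fubar' example from above:

package Foo;
use Moose;

has 'bar' => (
    is      => 'ro',
    lazy    => 1,
    builder => '_build_bar',
);

# today this returns a constant; later it could query the database instead
sub _build_bar { 'fubar' }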
I have a Moose::Role from which I would like to call some extra subs on the class when that role is applied to the class.
Is there an easy way to modify what happens when the role is applied, without having to dig too much into Moose::Meta::Role-type coding? Ideally, I'd just like to use after 'apply' => ... to add the extra stuff.
Edit:
I'm specifically using this with a DBIx::Class::Core result definition to create something like a component that also modifies the constructor. I would just write it as a component if I could get at the BUILDARGS and BUILD subs for the result, but I can't seem to. So, instead of doing load_component, I'm doing with 'Role', but some of the effects of the component are to add belongs_to relationships to the class. Hence, I was thinking the best way to do that is during application of the role to the class.
In a briefly-lived comment I referred you to this question, which discusses how to access the metaclass of the class the role is being applied to (e.g. so you can build onto the class conditionally). However, that's a really stinky use of MooseX::Role::Parameterized, which is what provides you that information, and it also won't work if the role is being applied to another role rather than to a class.
As an alternative, you could write a sugar function which receives the meta information, and build onto the class in that way:
sub foo
{
    my ($meta, %options) = @_;

    # based on what is present in %options, add additional attributes...
    $meta->add_attribute(...);
}
See Moose::Cookbook::Extending::Recipe4 for an example of writing sugar functions.
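For context, here is a minimal sketch of how such a sugar function might be exported with Moose::Exporter; the package name MyApp::Sugar, the attribute name, and the wants_extra option are placeholders, not anything from the question:

package MyApp::Sugar;
use Moose::Exporter;

Moose::Exporter->setup_import_methods(
    with_meta => ['foo'],   # foo() receives the caller's metaclass as its first argument
);

sub foo {
    my ($meta, %options) = @_;

    # hypothetical option: only add the attribute when asked for
    $meta->add_attribute('extra_thing', is => 'rw')
        if $options{wants_extra};
}

1;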
You could use a parameterized role. There is an example on how to access the consuming class in the tutorial. That being said, I would advise you to join the Moose and DBIx-Class IRC channels or mailing lists to look for best-practices in this regard.
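For reference, a rough sketch of the shape such a parameterized role could take; the package, relationship, and column names are placeholders, not working DBIx::Class code:

package My::ResultRole;
use MooseX::Role::Parameterized;

role {
    my ($params, %args) = @_;
    my $consumer = $args{consumer};   # meta object of the consuming class
    my $class    = $consumer->name;   # package name of the consumer

    # e.g. add a relationship to the consuming DBIx::Class result class
    $class->belongs_to(owner => 'My::Schema::Result::Owner', 'owner_id')
        if $class->can('belongs_to');
};

1;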
What I found that works, is compact, and seems in keeping with the intent of the docs is to use a trait to modify the meta role used by my particular role:
package DBIx::Class::Meta::Role::MyRole;
use Moose;
BEGIN { extends 'Moose::Meta::Role'; }
after 'apply' => sub {
## ..my mods to add extra relationships to DBIx::Class::Core result
};
no Moose;
package DBIx::Class::MyRole;
use Moose::Role -metaclass => 'DBIx::Class::Meta::Role::MyRole';
Say I have a class that looks like the following:
internal class SomeClass
{
IDependency _someDependency;
...
internal string SomeFunctionality_MakesUseofIDependency()
{
...
}
}
And then I want to add functionality that is related but makes use of a different dependency to achieve its purpose. Perhaps something like the following:
internal class SomeClass
{
IDependency _someDependency;
IDependency2 _someDependency2;
...
internal string SomeFunctionality_MakesUseofIDependency()
{
...
}
internal string OtherFunctionality_MakesUseOfIDependency2()
{
...
}
}
When I write unit tests for this new functionality (or update the unit tests that I have for the existing functionality), I find myself creating a new instance of SomeClass (the SUT) whilst passing in null for the dependency that I don't need for the particular bit of functionality that I'm looking to test.
This seems like a bad smell to me but the very reason why I find myself going down this path is because I found myself creating new classes for each piece of new functionality that I was introducing. This seemed like a bad thing as well and so I started attempting to group similar functionality together.
My question: should all dependencies of a class be consumed by all its functionality i.e. if different bits of functionality use different dependencies, it is a clue that these should probably live in separate classes?
When every instance method touches every instance variable, the class is maximally cohesive. When no instance method shares an instance variable with any other, the class is minimally cohesive. While it is true that we like cohesion to be high, it's also true that the 80-20 rule applies: getting that last little increase in cohesion may require a mammoth effort.
In general if you have methods that don't use some variables, it is a smell. But a small odor is not sufficient to completely refactor the class. It's something to be concerned about, and to keep an eye on, but I don't recommend immediate action.
Does SomeClass maintain an internal state, or is it just "assembling" various pieces of functionality? Can you rewrite it that way:
internal class SomeClass
{
...
internal string SomeFunctionality(IDependency _someDependency)
{
...
}
internal string OtherFunctionality(IDependency2 _someDependency2)
{
...
}
}
In this case, you may not break SRP if SomeFunctionality and OtherFunctionality are somehow (functionally) related, which is not apparent from the placeholders.
And you have the added value of being able to select the dependency to use from the client, not at creation/DI time. Maybe some tests defining use cases for those methods would help clarifying the situation: If you can write a meaningful test case where both methods are called on same object, then you don't break SRP.
As for the Facade pattern, I have seen it go wild too many times to like it; you know, when you end up with a class with 50+ methods... The question is: why do you need it? For efficiency reasons à la old-timer EJB?
I usually group methods into classes if they use a shared piece of state that can be encapsulated in the class. Having dependencies that aren't used by all methods in a class can be a code smell but not a very strong one. I usually only split up methods from classes when the class gets too big, the class has too many dependencies or the methods don't have shared state.
My question: should all dependencies of a class be consumed by all its functionality i.e. if different bits of functionality use different dependencies, it is a clue that these should probably live in separate classes?
It is a hint, indicating that your class may be a little incoherent ("doing more than just one thing"), but like you say, if you take this too far, you end up with a new class for every piece of new functionality. So you would want to introduce facade objects to pull them together again (it seems that a facade object is exactly the opposite of this particular design rule).
You have to find a good balance that works for you (and the rest of your team).
Looks like overloading to me.
You're trying to do something and there are two ways to do it. At the SomeClass level, I'd have one dependency do the work, then have that single dependent class support the two (or more) ways to do the same thing, most likely with mutually exclusive input parameters.
In other words, I'd have the same code you have for SomeClass, but define it as SomeWork instead, and not include any other unrelated code.
HTH
A Facade is used when you want to hide complexity (like an interface to a legacy system) or you want to consolidate functionality while being backwards compatible from an interface perspective.
The key in your case is why you have the two different methods in the same class. Is the intent to have a class which groups together similar types of behavior even if it is implemented through unrelated code, as in aggregation? Or are you attempting to support the same behavior but with alternative implementations depending on the specifics, which would be a hint for an inheritance/overloading type of solution?
The problem will be whether this class will continue to grow and in what direction. Two methods won't make a difference but if this repeats with more than 3, you will need to decide whether you want to declare it as a facade/adapter or that you need to create child classes for the variations.
Your suspicions are correct but the smell is just the wisp of smoke from a burning ember. You need to keep an eye on it in case it flares up and then you need to make a decision as how you want to quench the fire before it burns out of control.
I'm using Moose and I need to wrap method calls in my project. It's important that my wrapping code be the most outer modifier. What I've done so far is put my method modifiers in a Moose Role and then applied that role at the end of my class like this:
use Moose::Util;
Moose::Util::apply_all_roles(__PACKAGE__->meta, ('App:Roles::CustomRole'));
__PACKAGE__->meta->make_immutable;
This allows me to be reasonably sure that my role's modifiers are defined last, therefore giving me the correct behavior for "before" and "after." (The "before" and "after" in the role are called very first and very last.)
I originally thought this would be sufficient, but I now really need to wrap methods in a similar way with "around." Class::MOP, which Moose is built on, applies "around" modifiers very first, therefore they're called after "before" and before "after."
For more detail, here is the current calling order of my modifiers:
CUSTOM ROLE before
before 2
before 1
CUSTOM ROLE around
around
method
around
CUSTOM ROLE around
after 1
after 2
CUSTOM ROLE AFTER
I really need something like this:
CUSTOM ROLE before
CUSTOM ROLE around
before 2
before 1
around
method
around
after 1
after 2
CUSTOM ROLE around
CUSTOM ROLE AFTER
Any ideas on how to get my "around" modifier to be applied / called where I want it to? I know I could do some symbol table hacking (like Class::MOP is already doing) but I'd really rather not.
The simplest solution is to have the CUSTOM ROLE define a method that calls the main method, and then wrap that.
role MyRole {
    requires 'wrapped_method';

    method custom_role_base_wrapper { $self->wrapped_method(@_) }

    around custom_role_base_wrapper { ... }
    before custom_role_base_wrapper { ... }
}
The problem you're having is that you're trying to have the CUSTOM ROLE around wrap something other than a method, which is not what it is designed to do. Other than writing similar symbol-table hackery like you've suggested (you could probably argue one of the Moose people into exposing an API in Class::MOP to help get there), the only other solution I can think of is the one above.
If you don't want the extra call stack frame that custom_role_base_wrapper will add, you should look at Yuval's Sub::Call::Tail or using goto to manipulate the call stack.
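For example, the goto route might look roughly like this (a hedged sketch in plain Perl rather than the sugar syntax above; goto &CODE replaces the wrapper's stack frame and passes the current @_ along unchanged):

sub custom_role_base_wrapper {
    # tail-call the wrapped method so no extra frame is left behind
    goto $_[0]->can('wrapped_method');
}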
I'm fairly new to Moose, but why do you do this:
use Moose::Util;
Moose::Util::apply_all_roles(__PACKAGE__->meta, ('App:Roles::CustomRole'));
rather than simply this?
with 'App:Roles::CustomRole';
Regarding your question, it's a bit of a hack, but could you split your around method into before and after methods and apply the role at the end of your class definition (so it is applied in your desired order)? You could use private attributes to save state between the two methods if absolutely necessary.
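If it helps, here is a hedged sketch of that split-the-around idea, using a private attribute to carry state from the before modifier to the after modifier; the package, attribute, and method names are placeholders, not taken from the question:

package My::CustomRole;
use Moose::Role;

# private stash for whatever the around modifier would have kept in a lexical
has '_wrap_state' => ( is => 'rw', init_arg => undef );

before 'do_work' => sub {
    my $self = shift;
    $self->_wrap_state({ started => time });
};

after 'do_work' => sub {
    my $self = shift;
    my $state = $self->_wrap_state;
    # ... use $state here (e.g. log elapsed time), then clear it
    $self->_wrap_state(undef);
};

1;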