What is the best way of adding a new object in the Entity Framework? The designer adds all these Create methods, but to me it makes more sense to call new on an object. The generated CreateCustomer method, for example, could be called like this:
Customer c = context.CreateCustomer(System.Guid.NewGuid(), "Name");
context.AddToCustomer(c);
whereas to me it would make more sense to do:
Customer c = new Customer {
    Id = System.Guid.NewGuid(),
    Name = "Name"
};
context.AddToCustomer(c);
The latter is much more explicit since the properties that are being set at construction are named. I assume that the designer adds the create methods on purpose. Why should I use those?
As Andrew says (up-voted), it's quite acceptable to use regular constructors. As for why the "Create" methods exist, I believe the intention is to make explicit which properties are required. If you use such methods, you can be assured that you haven't forgotten to set any required property, which would otherwise cause an exception when you call SaveChanges. However, the code generator for the Entity Framework doesn't quite get this right; it includes server-generated, auto-increment properties as well. These are technically "required", but you don't need to specify them.
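For illustration, the generated factory is roughly of this shape; this is a hedged sketch rather than the actual generated code, showing one parameter per "required" property, including the server-generated Id mentioned above:

// Rough sketch of a generated Create method: one parameter per "required"
// property, including the server-generated key the answer mentions.
public static Customer CreateCustomer(global::System.Guid id, global::System.String name)
{
    Customer customer = new Customer();
    customer.Id = id;
    customer.Name = name;
    return customer;
}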
You can absolutely use the second, more natural way. I'm not even sure why the first way exists at all.
I guess it has to do with many things. It looks like a factory method to me, which allows for a single point of extension. Secondly, doing all of this in your constructor is not really best practice, especially when a lot of work happens at initialisation. Your question seems reasonable, and I even agree with it, but in terms of object design it is more practical the way they did it.
I'm a newbie to CRM development, so I want to get a few things clear. You first need to know the reason why you have to do something in order to fully understand it. So let's get to the question.
I know you have to do this when making a plugin:
var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity)
{
    var entity = (Entity)context.InputParameters["Target"];
    if (entity.LogicalName == "myEntity")
    {
        //Do something with your entity
    }
}
Now, in the Plugin Registration Tool, you can configure, among other things, the Message your plugin fires on and which entities (and which specific attributes of them) it applies to.
I can see the validations are very useful when manipulating several entities with a single plugin.
Now, let's suppose you are only updating a single entity with your plugin. Why should I check that the entity in "Target" is the entity I want to work on, if I already know it is, because I registered the plugin for that entity in particular? What else could the Entity be in that scenario?
Also, in what cases is "Target" NOT an Entity (in the current context)?
Thanks in advance, and sorry if this is a silly question.
See this answer: Is Target always an Entity or can it be EntityReference?
Per the SDK (https://msdn.microsoft.com/en-us/library/gg309673.aspx):
Note that not all requests contain a Target property that is of type Entity, so you have to look at each request or response. For example, DeleteRequest has a Target property, but its type is EntityReference.
The bottom line is that you need to look at the request (all plugins fire on an OrganizationRequest, of which there are many derived types: https://msdn.microsoft.com/en-us/library/microsoft.xrm.sdk.organizationrequest.aspx) to determine the type of the Target property.
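As a rough illustration (a sketch, not production code; the message-to-type mapping in the comments is just the common case), a plugin registered on several messages could branch on the Target type like this:

if (context.InputParameters.Contains("Target"))
{
    var target = context.InputParameters["Target"];
    if (target is Entity)
    {
        // e.g. Create/Update: Target carries the entity image with the changed attributes.
        var entity = (Entity)target;
        // entity.LogicalName, entity.Attributes, ...
    }
    else if (target is EntityReference)
    {
        // e.g. Delete: Target only references the record being deleted.
        var reference = (EntityReference)target;
        // reference.LogicalName, reference.Id, ...
    }
}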
As Nicknow said, the Input Parameters will change depending on the Message being executed.
You can get that information from the MSDN (every request will list the Input Parameters under the Properties section, for instance the CreateRequest) or using the ParameterBrowser.
There's also a really nice, type-safe alternative for dealing with this (IntelliSense included), described in the following blog article: mktange's blog.
I am looking through the Xamarin Sport app code and trying to understand some of the cool things they are doing in it. I cannot understand what IsDirty is being used for exactly. It gets defined here and implemented here and used in many places, such as here.
I read a little about ICommand's IsDirty property, so maybe it is a way to mark an entire model as dirty, but what implications does that have?
I also see it being used here which I am assuming is why they created it in the first place.
Thanks for y'all's insight into it.
They're just using it as a clever way to handle modification detection. Consider a "Save Changes" feature; you don't actually want to enable the "Save" button until something has changed, and you can key off the IsDirty property to test that.
Technically, you could handle this yourself by hooking INotifyPropertyChanged.PropertyChanged and maintaining a dirty bit of your own (possibly in a base class), but rather than require all of their classes to have an IsDirty property that they may or may not need, they've made it an optional feature that a class can implement. For example, take a look at GameResult for an example of something that can't be changed and therefore can't be marked as dirty.
With this approach, you've minimized the amount of code you need to write to implement this functionality. All your derived classes need to do is derive from BaseNotify, implement IDirty, and call SetPropertyChanged(...) in each setter to set the private backing field, signal to any observers that a property has changed, and automatically set the dirty bit.
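As a rough sketch, assuming the SetPropertyChanged helper takes the backing field by ref (the class and property names here are invented, not the app's actual models):

public class SampleItem : BaseNotify, IDirty
{
    string title;

    public string Title
    {
        get { return title; }
        // SetPropertyChanged updates the backing field, raises PropertyChanged,
        // and flips the IsDirty flag on any IDirty implementor.
        set { SetPropertyChanged(ref title, value); }
    }

    public bool IsDirty { get; set; }
}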
NOTE: I did just make an interesting observation: while the implementation of the SetProperty extension method does set the IsDirty flag, the BaseNotify class' IsDirty implementation doesn't call anything to bubble up a PropertyChanged event for IsDirty, which means bindings against it won't get updated when it changes. I believe the fix would be for that extension method to invoke PropertyChanged with the property name "IsDirty":
if (dirty != null) {
    dirty.IsDirty = true;
    handler.Invoke(sender, new PropertyChangedEventArgs("IsDirty"));
    // Yes, I'm a bad person for hard-coding the name.
}
Alternatively, you could defer signaling the IsDirty change until after you signal that the original property has changed. I just chose to keep it with the original logic.
I think it's relatively simple and you're on the right track: the purpose of that property is to have an easy way to know whether some property has changed, and therefore whether the whole object needs to be saved. It's baked into the way property changes are propagated, so you don't have to set it yourself whenever a property value is set.
tl;dr: You can use it to check whether your (view) model is worth a save operation ;-).
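For instance, a hedged sketch of gating a save on the flag (SaveAsync and the viewModel variable are made-up names for illustration):

// Only persist when something actually changed.
if (viewModel.IsDirty)
{
    await SaveAsync(viewModel);
    viewModel.IsDirty = false;
}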
We have the following code structure in our code:
namedParamJdbcTemplate.query(buildMyQuery(request), new MapSqlParameterSource(), myresultSetExtractor);
and
namedParamJdbcTemplate.query(buildMyQuery(request), new BeanPropertySqlParameterSource(mybean), myresultSetExtractor);
How can I expect these method calls without using the isA matcher?
Assume that I am passing mybean and myresultSetExtractor into the methods in which the above code lies.
You can do it this way:
EasyMock.expect(namedParamJdbcTemplateMock.query(EasyMock.anyObject(String.class), EasyMock.anyObject(SqlParameterSource.class), EasyMock.anyObject(ResultSetExtractor.class))).andReturn(...);
Likewise you can mock the other methods as well.
Hope this helps! Good luck!
If you can't use PowerMock to tell the constructors to return mock instances, then you'll have to use some form of Matcher.
isA is a good one.
As is anyObject which is suggested in another answer.
If I were you though, I'd be using Captures. A capture is an object that holds the value you provided to a method so that you can later perform assertions on the captured values and check they have the state you wanted. So you could write something like this:
Capture<MapSqlParameterSource> captureMyInput = new Capture<MapSqlParameterSource>();
//I'm not entirely sure of the types you're using, but the important one is the capture method
EasyMock.expect(namedParamJdbcTemplateMock.query(
        EasyMock.anyObject(String.class), EasyMock.capture(captureMyInput), EasyMock.eq(myresultSetExtractor))).andReturn(...);
MapSqlParameterSource caughtValue = captureMyInput.getValue();
//Then perform your assertions on the state of your caught value.
There are lots of examples floating around for how captures work, but this blog post is a decent example.
Is there any way to create a fake from a System.Type object in FakeItEasy? Similar to:
var instance = A.Fake(type);
I am trying to write a fake container for Autofac that automatically returns fakes for all resolved types. I have looked in the code for FakeItEasy and all the methods that support this are behind internal classes, but I have found the interface IFakeObjectContainer, which looks pretty interesting; however, the implementations still require registration of objects, which is exactly what I want to get around.
As of FakeItEasy 2.1.0 (but do consider upgrading to the latest release for more features and better bugfixes), you can create a fake from a Type like so:
using FakeItEasy.Sdk;
…
object fake = Create.Fake(type);
If you must use an earlier release, you could use a reflection-based approach to build a MethodInfo for the generic A.Fake&lt;T&gt;() method and invoke it with your runtime Type (since this is about auto-mocking, the reflection overhead shouldn't really be a problem).
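Something along these lines is a rough, untested sketch; it assumes the parameterless A.Fake&lt;T&gt;() overload is the one you need:

using System.Linq;
using FakeItEasy;
…
// Find the open generic A.Fake<T>() overload that takes no parameters,
// close it over the runtime type, and invoke it.
var fakeMethod = typeof(A).GetMethods()
    .First(m => m.Name == "Fake" && m.IsGenericMethodDefinition && m.GetParameters().Length == 0)
    .MakeGenericMethod(type);
object fake = fakeMethod.Invoke(null, null);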
This is best done using a registration handler. You should look into how AutofacContrib.Moq implements its MoqRegistrationHandler. You'll see that it is actually using the generic method MockRepository.Create to make fake instances. Creating a similar handler for FakeItEasy should be quite simple.
Say I have a class that looks like the following:
internal class SomeClass
{
    IDependency _someDependency;
    ...

    internal string SomeFunctionality_MakesUseofIDependency()
    {
        ...
    }
}
And then I want to add functionality that is related but makes use of a different dependency to achieve its purpose. Perhaps something like the following:
internal class SomeClass
{
    IDependency _someDependency;
    IDependency2 _someDependency2;
    ...

    internal string SomeFunctionality_MakesUseofIDependency()
    {
        ...
    }

    internal string OtherFunctionality_MakesUseOfIDependency2()
    {
        ...
    }
}
When I write unit tests for this new functionality (or update the unit tests that I have for the existing functionality), I find myself creating a new instance of SomeClass (the SUT) whilst passing in null for the dependency that I don't need for the particular bit of functionality that I'm looking to test.
This seems like a bad smell to me, but the very reason I find myself going down this path is that I had been creating a new class for each piece of new functionality I introduced. That seemed like a bad thing as well, so I started attempting to group similar functionality together.
My question: should all dependencies of a class be consumed by all its functionality i.e. if different bits of functionality use different dependencies, it is a clue that these should probably live in separate classes?
When every instance method touches every instance variable, the class is maximally cohesive. When no instance method shares an instance variable with any other, the class is minimally cohesive. While it is true that we like cohesion to be high, it's also true that the 80-20 rule applies. Getting that last little increase in cohesion may require a mammoth effort.
In general, if you have methods that don't use some of the instance variables, it is a smell. But a small odor is not sufficient reason to completely refactor the class. It's something to be concerned about and to keep an eye on, but I don't recommend immediate action.
Does SomeClass maintain internal state, or is it just "assembling" various pieces of functionality? Can you rewrite it this way:
internal class SomeClass
{
    ...

    internal string SomeFunctionality(IDependency someDependency)
    {
        ...
    }

    internal string OtherFunctionality(IDependency2 someDependency2)
    {
        ...
    }
}
In this case, you may not be breaking SRP if SomeFunctionality and OtherFunctionality are somehow (functionally) related, which is not apparent from the placeholders.
And you have the added value of being able to select the dependency to use from the client, not at creation/DI time. Maybe some tests defining use cases for those methods would help clarify the situation: if you can write a meaningful test case where both methods are called on the same object, then you don't break SRP.
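For illustration only, a sketch of that kind of test (the xUnit attributes, the fake dependency classes, and the scenario name are all assumptions made up here, not taken from the original code):

using Xunit;

// Hypothetical stubs standing in for real implementations of the interfaces.
public class FakeDependency : IDependency { /* minimal stub */ }
public class FakeDependency2 : IDependency2 { /* minimal stub */ }

public class SomeClassTests
{
    [Fact]
    public void SameScenario_CanMeaningfullyUseBothMethods()
    {
        // If one realistic scenario exercises both methods on the same instance,
        // that's a sign the class is not hiding two unrelated responsibilities.
        var sut = new SomeClass();

        var first = sut.SomeFunctionality(new FakeDependency());
        var second = sut.OtherFunctionality(new FakeDependency2());

        Assert.NotNull(first);
        Assert.NotNull(second);
    }
}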
As for the Facade pattern, I have seen it go wild too many times to like it; you know, when you end up with a 50+ method class... The question is: why do you need it? For efficiency reasons à la old-timer EJB?
I usually group methods into classes if they use a shared piece of state that can be encapsulated in the class. Having dependencies that aren't used by all methods in a class can be a code smell, but not a very strong one. I usually only split methods out of a class when the class gets too big, the class has too many dependencies, or the methods don't share state.
My question: should all dependencies of a class be consumed by all its functionality i.e. if different bits of functionality use different dependencies, it is a clue that these should probably live in separate classes?
It is a hint, indicating that your class may be a little incoherent ("doing more than just one thing"), but like you say, if you take this too far, you end up with a new class for every piece of new functionality. So you would want to introduce facade objects to pull them together again (it seems that a facade object is exactly the opposite of this particular design rule).
You have to find a good balance that works for you (and the rest of your team).
Looks like overloading to me.
You're trying to do something and there are two ways to do it. At the SomeClass level, I'd have one dependency that does the work, then have that single dependent class support the two (or more) ways to do the same thing, most likely with mutually exclusive input parameters.
In other words, I'd have the same code you have for SomeClass, but define it as SomeWork instead, and not include any other unrelated code.
HTH
A Facade is used when you want to hide complexity (like an interface to a legacy system) or you want to consolidate functionality while being backwards compatible from an interface perspective.
The key in your case is why you have the two different methods in the same class. Is the intent to have a class which groups together similar types of behavior even if it is implemented through unrelated code, as in aggregation? Or are you attempting to support the same behavior with alternative implementations depending on the specifics, which would be a hint for an inheritance/overloading type of solution?
The problem will be whether this class continues to grow, and in what direction. Two methods won't make a difference, but if this repeats with more than three, you will need to decide whether you want to declare it as a facade/adapter or whether you need to create child classes for the variations.
Your suspicions are correct, but the smell is just a wisp of smoke from a burning ember. You need to keep an eye on it in case it flares up, and then you need to decide how you want to quench the fire before it burns out of control.