Does an IMP created with imp_implementationWithBlock() get cached? - objective-c-runtime

The following provides a convenient way to add methods to a class at runtime:
IMP imp = imp_implementationWithBlock((void *)objc_unretainedPointer(^(id me, BOOL selected) {
    // method body
}));
The method can then be added using class_addMethod(). Will these implementations eventually become cached and use the fast-track method dispatching system?

My gut feeling would be yes, because doing otherwise would complicate the delicate, consistent, and beautiful Objective-C runtime :)
Also, this link -> http://kevin.sb.org/2006/11/16/objective-c-caching-and-method-swizzling/ seems pretty confident.
They're all Methods in the Class after you call class_addMethod. As far as I can tell (please correct me if I'm wrong), there's no way to distinguish them from ones that were compiled in.

How to create a HyperHTML custom element extended from HTMLButtonElement

I would like to have a semantically named custom element that extends button, like fab-button:
class FabButton extends HTMLButtonElement {
  constructor() {
    super();
    this.html = hyperHTML.bind(this);
  }
}
customElements.define("fab-button", FabButton);
Extending HTMLButtonElement doesn't seem to work.
Is there a way to extend something other than HTMLElement with HyperHTML's "document-register-element.js"?
Codepen example:
https://codepen.io/jovdb/pen/qoRare
It's difficult to answer this, because it's tough to know where to start.
The TL;DR solution, though, is here, but it needs a lot of explanation.
Extending built-ins is a ghost in the Web specs
It doesn't matter what WHATWG says: customized built-ins are a de facto dead specification, because WebKit strongly opposed them, and Firefox, as well as Edge (which never even shipped Custom Elements), didn't push to have them either.
Accordingly, as a starting point, extending built-ins with the Custom Elements V1 specification is discouraged.
You might have luck with V0, but that's a Chrome-only API and it's already one of those APIs born to die (R.I.P. WebSQL).
My polyfill follows specs by definition
The document-register-element polyfill, born with V0 but revamped with V1, follows the specification as closely as possible, which is why it makes extending built-ins possible: WHATWG still has that part in the spec.
That also means you need to understand how extending built-ins works.
Simply defining a class that extends HTMLButtonElement does not give you a button; you need to do at least three extra things:
// define via the whole signature
customElements.define(
  "fab-button",
  FabButton,
  {extends: 'button'}
);
... but also ... allow the polyfill to work
// the constructor might receive an instance,
// and such an instance is the upgraded one
constructor(...args) {
  const self = super(...args);
  self.html = hyperHTML.bind(self);
  return self;
}
and, most important, its most basic representation on the page would look like this:
<button is="fab-button">+</button>
Bear in mind that with ES6 classes, super(...args) always returns the current context. The const self = super(...args) pattern is there to guarantee that super constructors didn't return other instances as the upgraded objects.
The constructor is not your friend
As harsh as it sounds, Custom Elements constructors work only with Shadow DOM and addEventListener, and nothing else, really.
When an element is created, it is not necessarily upgraded yet; in fact, most likely it won't be upgraded yet.
There are at least 2 bugs filed against every browser and 3 inconsistent behaviors around Custom Elements initialization, but your best bet is that once connectedCallback is invoked, you really have the custom element's node content.
connectedCallback() {
  this.slots = [...this.childNodes];
  this.render();
}
That way you are sure you actually render the component once it is live in the DOM, and not before it even has any content.
I hope I've answered your question in a way that also warns you away from customized built-ins if this is the beginning of a new project: they are unfortunately not a safe bet for the future of your application.
Best Regards.

How to expect method calls that have inline new-instance creations in EasyMock

We have the following code structure in our code:
namedParamJdbcTemplate.query(buildMyQuery(request),new MapSqlParameterSource(),myresultSetExtractor);
and
namedParamJdbcTemplate.query(buildMyQuery(request),new BeanPropertySqlParameterSource(mybean),myresultSetExtractor);
How can I expect these method calls without using isA matcher?
Assume that I am passing mybean and myresultSetExtractor in the request to the methods in which the above code lies.
You can do it this way:
EasyMock.expect(namedParamJdbcTemplateMock.query(EasyMock.anyObject(String.class), EasyMock.anyObject(SqlParameterSource.class), EasyMock.anyObject(ResultSetExtractor.class))).andReturn(...);
Likewise you can do the mocking for other methods as well.
Hope this helps!
Good luck!
If you can't use PowerMock to tell the constructors to return mock instances, then you'll have to use some form of Matcher.
isA is a good one.
As is anyObject which is suggested in another answer.
If I were you though, I'd be using Captures. A capture is an object that holds the value you provided to a method so that you can later perform assertions on the captured values and check they have the state you wanted. So you could write something like this:
Capture<MapSqlParameterSource> captureMyInput = new Capture<MapSqlParameterSource>();
// I'm not entirely sure of the types you're using, but the important part is the capture argument
EasyMock.expect(namedParamJdbcTemplateMock.query(
    EasyMock.anyObject(String.class), EasyMock.capture(captureMyInput), EasyMock.eq(myresultSetExtractor))).andReturn(...);
MapSqlParameterSource caughtValue = captureMyInput.getValue();
// Then perform your assertions on the state of your caught value.
There are lots of examples floating around for how captures work, but this blog post is a decent example.

Is there any way to create a fake from a System.Type object in FakeItEasy?

Is there any way to create a fake from a System.Type object in FakeItEasy? Similar to:
var instance = A.Fake(type);
I'm trying to write a fake container for Autofac that automatically returns fakes for all resolved types. I have looked at the code for FakeItEasy, and all the methods that support this are behind internal classes. I did find the interface IFakeObjectContainer, which looks pretty interesting, but the implementations still need registration of objects, which is exactly what I want to get around.
As of FakeItEasy 2.1.0 (but do consider upgrading to the latest release for more features and better bugfixes), you can create a fake from a Type like so:
using FakeItEasy.Sdk;
…
object fake = Create.Fake(type);
If you must use an earlier release, you could use a reflection-based approach to get a MethodInfo for the generic A.Fake<T>() method (since this is about auto-mocking, the reflection cost shouldn't really be a problem).
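Just to illustrate, here is a minimal sketch of that reflection approach; the FakeFactory class name is made up for this example, and it assumes the parameterless generic A.Fake<T>() overload:
using System;
using System.Linq;
using System.Reflection;
using FakeItEasy;

// Hypothetical helper: closes the generic A.Fake<T>() method over a runtime Type.
public static class FakeFactory
{
    public static object CreateFake(Type type)
    {
        // Find the parameterless generic A.Fake<T>() overload...
        MethodInfo openFake = typeof(A)
            .GetMethods(BindingFlags.Public | BindingFlags.Static)
            .Single(m => m.Name == "Fake"
                      && m.IsGenericMethodDefinition
                      && m.GetParameters().Length == 0);

        // ...then close it over the requested type and invoke it.
        return openFake.MakeGenericMethod(type).Invoke(null, null);
    }
}
With a helper like that, FakeFactory.CreateFake(someResolvedType) returns the same kind of proxy A.Fake<IWhatever>() would, just driven by a runtime Type.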
This is best done using a registration handler. You should look into how AutofacContrib.Moq implements its MoqRegistrationHandler. You'll see that it is actually using the generic method MockRepository.Create to make fake instances. Creating a similar handler for FakeItEasy should be quite simple.
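To sketch what such a handler could look like (treat this as an assumption-laden outline rather than drop-in code: the IRegistrationSource and RegistrationBuilder signatures shown roughly match the pre-6.x Autofac API, and FakeRegistrationHandler plus the FakeFactory helper above are made-up names):
using System;
using System.Collections.Generic;
using Autofac.Builder;
using Autofac.Core;

// Sketch of a registration source that serves FakeItEasy fakes for any
// interface service that nothing else has registered.
public class FakeRegistrationHandler : IRegistrationSource
{
    public bool IsAdapterForIndividualComponents
    {
        get { return false; }
    }

    public IEnumerable<IComponentRegistration> RegistrationsFor(
        Service service,
        Func<Service, IEnumerable<IComponentRegistration>> registrationAccessor)
    {
        var typedService = service as TypedService;
        if (typedService == null || !typedService.ServiceType.IsInterface)
            yield break;

        // Build a registration whose activator just asks FakeItEasy for a fake.
        yield return RegistrationBuilder
            .ForDelegate(typedService.ServiceType,
                (context, parameters) => FakeFactory.CreateFake(typedService.ServiceType))
            .As(typedService.ServiceType)
            .SingleInstance()
            .CreateRegistration();
    }
}
The source would then be hooked up with builder.RegisterSource(new FakeRegistrationHandler()) on the ContainerBuilder, mirroring how the Moq handler is used.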

Single Responsibility Principle: do all public methods in a class have to use all class dependencies?

Say I have a class that looks like the following:
internal class SomeClass
{
    IDependency _someDependency;
    ...

    internal string SomeFunctionality_MakesUseofIDependency()
    {
        ...
    }
}
And then I want to add functionality that is related but makes use of a different dependency to achieve its purpose. Perhaps something like the following:
internal class SomeClass
{
    IDependency _someDependency;
    IDependency2 _someDependency2;
    ...

    internal string SomeFunctionality_MakesUseofIDependency()
    {
        ...
    }

    internal string OtherFunctionality_MakesUseOfIDependency2()
    {
        ...
    }
}
When I write unit tests for this new functionality (or update the unit tests that I have for the existing functionality), I find myself creating a new instance of SomeClass (the SUT) whilst passing in null for the dependency that I don't need for the particular bit of functionality that I'm looking to test.
This seems like a bad smell to me, but the very reason I went down this path is that I found myself creating a new class for each piece of new functionality I introduced. That seemed like a bad thing as well, so I started attempting to group similar functionality together.
My question: should all dependencies of a class be consumed by all of its functionality, i.e. if different bits of functionality use different dependencies, is that a clue that they should probably live in separate classes?
When every instance method touches every instance variable, the class is maximally cohesive. When no instance method shares an instance variable with any other, the class is minimally cohesive. While it is true that we like cohesion to be high, it's also true that the 80-20 rule applies. Getting that last little increase in cohesion may require a mammoth effort.
In general if you have methods that don't use some variables, it is a smell. But a small odor is not sufficient to completely refactor the class. It's something to be concerned about, and to keep an eye on, but I don't recommend immediate action.
Does SomeClass maintain internal state, or is it just "assembling" various pieces of functionality? Could you rewrite it this way:
internal class SomeClass
{
    ...

    internal string SomeFunctionality(IDependency _someDependency)
    {
        ...
    }

    internal string OtherFunctionality(IDependency2 _someDependency2)
    {
        ...
    }
}
In this case, you may not break SRP if SomeFunctionality and OtherFunctionality are somehow (functionally) related, which is not apparent from the placeholders.
And you have the added value of being able to select the dependency to use from the client, not at creation/DI time. Maybe some tests defining use cases for those methods would help clarify the situation: if you can write a meaningful test case where both methods are called on the same object, then you don't break SRP.
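As a minimal sketch of what such a test could look like (NUnit-style; the stub types and test names are invented for illustration, and the signatures follow the rewritten SomeClass above):
using NUnit.Framework;

// Throwaway stand-ins for the question's dependencies, just for this sketch.
class StubDependency : IDependency { }
class StubDependency2 : IDependency2 { }

[TestFixture]
public class SomeClassTests
{
    [Test]
    public void Both_bits_of_functionality_make_sense_on_the_same_instance()
    {
        // Assumes InternalsVisibleTo, since SomeClass and its methods are internal.
        var sut = new SomeClass();

        // Each call receives only the dependency it actually needs,
        // so nothing has to be passed in as null.
        string first = sut.SomeFunctionality(new StubDependency());
        string second = sut.OtherFunctionality(new StubDependency2());

        Assert.That(first, Is.Not.Null);
        Assert.That(second, Is.Not.Null);
    }
}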
As for the Facade pattern, I have seen it go wild too many times to like it; you know, when you end up with a class of 50+ methods... The question is: why do you need it? For efficiency reasons à la old-timer EJB?
I usually group methods into classes if they use a shared piece of state that can be encapsulated in the class. Having dependencies that aren't used by all methods in a class can be a code smell but not a very strong one. I usually only split up methods from classes when the class gets too big, the class has too many dependencies or the methods don't have shared state.
My question: should all dependencies of a class be consumed by all of its functionality, i.e. if different bits of functionality use different dependencies, is that a clue that they should probably live in separate classes?
It is a hint, indicating that your class may be a little incoherent ("doing more than just one thing"), but like you say, if you take this too far, you end up with a new class for every piece of new functionality. So you would want to introduce facade objects to pull them together again (it seems that a facade object is exactly the opposite of this particular design rule).
You have to find a good balance that works for you (and the rest of your team).
Looks like overloading to me.
You're trying to do something and there are two ways to do it, one way or another. At the SomeClass level, I'd have one dependency to do the work, then have that single dependent class support the two (or more) ways to do the same thing, most likely with mutually exclusive input parameters.
In other words, I'd have the same code you have for SomeClass, but define it as SomeWork instead, and not include any other unrelated code.
HTH
A Facade is used when you want to hide complexity (like an interface to a legacy system) or you want to consolidate functionality while being backwards compatible from an interface perspective.
The key in your case is why you have the two different methods in the same class. Is the intent to have a class which groups together similar types of behavior, even if it is implemented through unrelated code, as in aggregation? Or are you attempting to support the same behavior but with alternative implementations depending on the specifics, which would be a hint for an inheritance/overloading type of solution?
The problem will be whether this class continues to grow, and in what direction. Two methods won't make a difference, but if this repeats with more than three, you will need to decide whether you want to declare it as a facade/adapter or whether you need to create child classes for the variations.
Your suspicions are correct, but the smell is just the wisp of smoke from a burning ember. You need to keep an eye on it in case it flares up, and then you need to decide how you want to quench the fire before it burns out of control.

Linq-to-entities: How to create objects (new Xyz() vs CreateXyz())?

What is the best way of adding a new object in the Entity Framework? The designer adds all these Create methods, but to me it makes more sense to call new on an object. The generated CreateCustomer method, e.g., could be called like this:
Customer c = context.CreateCustomer(System.Guid.NewGuid(), "Name");
context.AddToCustomer(c);
where to me it would make more sense to do:
Customer c = new Customer {
    Id = System.Guid.NewGuid(),
    Name = "Name"
};
context.AddToCustomer(c);
The latter is much more explicit since the properties that are being set at construction are named. I assume that the designer adds the create methods on purpose. Why should I use those?
As Andrew says (up-voted), it's quite acceptable to use regular constructors. As for why the "Create" methods exist, I believe the intention is to make explicit which properties are required. If you use such methods, you can be assured that you have not forgotten to set any property that would otherwise throw an exception when you SaveChanges. However, the code generator for the Entity Framework doesn't quite get this right; it includes server-generated auto-increment properties as well. These are technically "required", but you don't need to specify them.
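For reference, the generated factory is roughly just a thin wrapper that assigns the required properties; a simplified approximation for the Customer entity from the question (real generated code differs by EF version and uses fully qualified type names) would be:
// Rough approximation of a designer-generated factory method for illustration only.
public static Customer CreateCustomer(Guid id, string name)
{
    Customer customer = new Customer();
    customer.Id = id;
    customer.Name = name;
    return customer;
}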
You can absolutely use the second, more natural way. I'm not even sure of why the first way exists at all.
I guess it has to do with many things. It looks like a factory method to me, which allows a single point of extension. Secondly, having all this in your constructor is not really best practice, especially when doing a lot of work at initialisation. Yes, your question seems reasonable, and I even agree with it; however, in terms of object design, it is more practical the way they did it.
Regards,
Marius C. (c_marius#msn.com)