The particular case I am looking at is a ClientBundle used in multiple UiBinders, included via a ui:with tag. Is a new client bundle generated for each one, and if so, what are the performance implications?
I could cache the ClientBundle with the @UiField(provided=true) annotation; is this a good idea?
Any caching that needs to be done is done internally: static members are generated with the ClientBundle implementation itself to ensure that once something has been done, it doesn't need to be done again. This applies to ImageResource usage as well as CssResource.ensureInjected.
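For concreteness, a common idiom is to expose a single shared instance from the bundle interface itself (the names below are illustrative, not taken from your code). Whether each UiBinder obtains the bundle through ui:with or you hand this instance in with @UiField(provided=true), GWT.create() always returns the same generated implementation class, so its static members keep the expensive work from being repeated:

import com.google.gwt.core.client.GWT;
import com.google.gwt.resources.client.ClientBundle;
import com.google.gwt.resources.client.CssResource;
import com.google.gwt.resources.client.ImageResource;

public interface AppResources extends ClientBundle {

    // One shared instance; GWT.create() returns the generated implementation,
    // whose static members do the caching described above.
    AppResources INSTANCE = GWT.create(AppResources.class);

    @Source("logo.png")
    ImageResource logo();

    @Source("app.css")
    Style style();

    interface Style extends CssResource {
        String header();
    }
}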
Use them as makes sense for any object; the compiler will do its best to produce the cleanest code possible. Where there is no chance of dynamic dispatch (no multiple implementations or subclasses), it will rewrite instance methods as static methods, and where there is no need for a this reference, it will compile out references to 'this' entirely.
In short, write readable code, and the compiler will worry about it. If you are concerned, use the excellent CPU and memory profiling tools in Chrome to compare strategies, but I'd be amazed if you saw any difference at all.
How can I release memory resources held by a Scala object? I presume the internals of an object work much like a Java static class inside the JVM. Is there a way to release these resources, say by using a classloader or some other technique?
If you control the code, try to avoid needing to do this. Objects should be assumed to be lazily-created, non-destroyable globals. As such, they really shouldn't hold large or mutable state if it can at all be avoided.
Given your comments, I believe the most reasonable way is to make the executive own the sandbox (i.e. nobody but the executive has direct references to the sandbox) and just create and initialize a new sandbox instead of "refreshing" an existing one. The old sandbox will be garbage-collected and anything it loaded should be as well.
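A rough sketch of that ownership arrangement (in Java rather than Scala, with hypothetical Executive and Sandbox types, since the real classes aren't shown here):

// Hypothetical types; the point is that only Executive holds a reference to Sandbox.
class Sandbox {
    private final byte[] largeState = new byte[64 * 1024 * 1024]; // stands in for expensive resources
}

class Executive {
    private Sandbox sandbox = new Sandbox();

    // Instead of "refreshing" the existing sandbox, drop it and build a fresh one.
    // With no other references to the old sandbox, it (and everything it loaded)
    // becomes eligible for garbage collection.
    void resetSandbox() {
        sandbox = new Sandbox();
    }
}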
I have made a class which has more than 100 static methods, but the class itself is not static. Do all of those methods live in application memory the whole time, or not? Or is this bad programming? Please advise.
Does it have any non-static methods as well?
If no, the class should be made static.
If yes, I'd say the design can be improved.
All methods are loaded into application memory as soon as the class is first used; however, only one copy of each method is kept in memory.
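If this were Java, for example, there is no such thing as a static top-level class, so "make the class static" usually means a final utility class with a private constructor; a minimal sketch with illustrative names:

// A pure utility class: no instances, only static methods.
public final class MathUtils {

    private MathUtils() {
        // Prevent instantiation; every member is static.
    }

    public static int clamp(int value, int min, int max) {
        return Math.max(min, Math.min(max, value));
    }

    // ...the remaining static methods go here.
}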
If you really want these methods available in a reusable context, write them as regular external C or C++ functions and add them to a library to use in other projects. Then reintroduce or wrap them as needed. I know - it's not a popular answer among Objective-C devs, but at least it scales much better when you begin to have really complex codebases.
Do all of those methods live in application memory the whole time, or not?
Yes, and these methods may not be stripped. When you use them in other projects, you pay for everything; with plain functions, unused ones may be stripped, so you only pay for what you actually use in the program. Specifically, the memory for a function or method exists in the binary and in the program's memory only once: instances do not clone methods; methods are referenced and looked up using a dispatch table in the runtime (like a vtable). Each instance of the class only accesses its selectors via this table, so method count does not make an instance larger. Memory in this case is rarely (if ever) a concern.
Or is this bad programming?
This is very unusual, and an indication that something has gone wrong in the design.
It seems a lot of Objective-C code uses singletons nowadays.
While a lot of people complain about singletons, e.g. Google (Where Have All the Singletons Gone?), their own engineers use them anyway: http://code.google.com/mobile/analytics/docs/iphone/
I know we already have some answers on Stack Overflow, but they are not specific to Objective-C as a dynamic language: Objective-C has categories, while many other languages do not.
So what is your opinion? Do you still use Singleton? If so, how do you make your app more testable?
Updated: I think we need to use code as an example for a more concrete discussion; so many discussions on SO are theory-based without a single line of code.
Let's use the Google Analytics iOS SDK as an example:
// Initialization
[[GANTracker sharedTracker] startTrackerWithAccountID:@"UA-0000000-1"
                                        dispatchPeriod:kGANDispatchPeriodSec
                                              delegate:nil];
// Track page view
NSError *error = nil;
[[GANTracker sharedTracker] trackPageview:@"/app_entry_point"
                                 withError:&error];
The beauty of the above code is that once you have initialized the tracker using "startTrackerWithAccountID", you can call "trackPageview" throughout your app without passing configuration around.
If you think Singleton is bad, can you improve the above code?
Thanks so much for your input; have a happy Friday.
This post is likely to be downvote-bait, but I don't really understand why singletons get no love. They're perfectly valid, you just have to understand what they're useful for.
In iOS development, you have one and only one instance of the application you currently are. You're only one application, right? You're not two or zero applications, are you? So the framework provides you with a UIApplication singleton through which to get at application-level OS and framework features. Having that be a singleton models the situation appropriately.
If you've got data fields of which there can and should be only one, and you need to get to them from all over the place in your app, there's totally nothing wrong with modeling that as a singleton too. Creating a singleton as a globals bucket is probably a misuse of the pattern, and I think that's probably what most people object to about them. But if you're modeling something that has "singleness" to it, a singleton might well be the way to go.
Some developers seem to have a fundamental disgust for singletons, but when actually asked why, they mumble something about globals and namespaces and aesthetics. Which I guess I can understand, if you've really resolved once and for all that Singletons are an anti-pattern and to be abhorred in all cases. But you're not thinking anymore, at that point. And the framework design disagrees with you.
I think most developers go through the Singleton phase, where you have everything you need at your fingertips, in a bunch of wonderful Singletons.
Then you discover that unit testing with Singletons can be difficult. You don't actually want to connect to the database, but your Singleton does. Add a layer of redirection and mock it.
Then you discover that unit testing isn't the only time you need different behaviour. You make your Singleton configurable to have different behaviour based on a parameter. You start to wonder if you need to split it into two Singletons. Then your code needs to know which Singleton to use, so you need a Singleton that knows which Singleton to use.
Then some other code starts messing with the values in your Singleton, while you're using it. How dare they! If you wanted just anybody to get at those values from anywhere, you'd make them global...
Once you get to this point, you start wondering if Singletons were the right solution. You start to see the dangers of global data, particularly within an OO design, where you just assume your data won't get poked at by other people.
So you go back and start passing the data along, rather than looking it up (this used to be called good OO design, but now it has a fancy name like "Dependency Injection").
Eventually you learn that Singletons are fine in moderation. You learn to recognize when your Singleton needs to stop being single.
So you get shared objects like UIApplication and NSUserDefaults. Those are good uses of Singletons.
I got burned enough in the Java Singleton craze a decade ago. I don't even consider writing my own Singletons. The only time I've needed anything similar in recent memory is wanting to cache the result of [NSCalendar currentCalendar] (which takes a long time). I created a category on NSCalendar and cached it as a static variable. I felt a bit dirty, but the alternative was painfully slow code.
To summarize and for those who tl;dr:
Singletons are a tool. They're not likely to be the right tool, but you have to discover that for yourself.
Why do you need an answer that is "totally Objective-C specific"? Singletons aren't totally Obj-C-specific either, and you're able to use those. Functions aren't Obj-C-specific, integers aren't Obj-C-specific, and yet you're able to use all of those in your Obj-C code.
The obvious replacements for a singleton work in any language.
A singleton is a badly-designed global.
So the simplest replacement is to just make it a regular global, without the silly "one instance only" restriction.
A more thorough solution is, instead of having a globally accessible object at all, pass it as a parameter to the functions that need it.
And finally, you can go for a hybrid solution using a Dependency Injection framework.
The problem with singletons is that they can lead to tight coupling. Let's say you're building an airline booking system: your booking controller might use an id<FlightsClient>.
A common way to obtain it within the controller would be as follows:
_flightsClient = [FlightsClient sharedInstance];
Drawbacks:
It becomes difficult to test a class in isolation.
If you want to change the flight client for another implementation, it's necessary to search through the application and swap it out one usage at a time.
If there's a case where the application should use a different implementation (e.g. OnlineFlightClient, OfflineFlightClient), things get tricky.
A good workaround is to apply the dependency injection design pattern.
Think of dependency injection as telling an architectural story: when the key actors in your application are pulled up into an assembly, the application's configuration is correctly modularized (removing duplication). Having created this script, it's now easy to reconfigure or swap one actor for another. In this way we need not understand all of a problem at once, and it's easy to evolve our app's design as the requirements evolve.
Here's a dependency injection library: https://github.com/typhoon-framework/Typhoon
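As a rough sketch of what constructor injection buys you here (written in Java for brevity, mirroring the hypothetical FlightsClient example above rather than any real SDK), the controller no longer decides which concrete client it gets:

// Hypothetical types mirroring the example above.
interface FlightsClient {
    void bookFlight(String flightNumber);
}

class OnlineFlightClient implements FlightsClient {
    public void bookFlight(String flightNumber) { /* talk to the booking service */ }
}

class BookingController {
    private final FlightsClient flightsClient;

    // The dependency is handed in, so tests can pass a stub or mock,
    // and OnlineFlightClient can be swapped for an offline variant in one place.
    BookingController(FlightsClient flightsClient) {
        this.flightsClient = flightsClient;
    }

    void book(String flightNumber) {
        flightsClient.bookFlight(flightNumber);
    }
}

A DI container such as Typhoon (or plain manual wiring) then becomes the single place that decides which concrete client each controller receives.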
Looking at Microsoft's Managed Extensibility Framework (MEF) and various IoC containers (such as Unity), I am failing to see when to use one type of solution over the other. More specifically, it seems like MEF handles most IoC type patterns and that an IoC container like Unity would not be as necessary.
Ideally, I would like to see a good use case where an IoC container would be used instead of, or in addition to, MEF.
When boiled down, the main difference is that IoC containers are generally most useful with static dependencies (known at compile-time), and MEF is generally most useful with dynamic dependencies (known only at run-time).
As such, they are both composition engines, but the emphasis is very different for each pattern. Design decisions thus vary wildly, as MEF is optimized around discovery of unknown parts, rather than registrations of known parts.
Think about it this way: if you are developing your entire application, an IoC container is probably best. If you are writing for extensibility, such that 3rd-party developers will be extending your system, MEF is probably best.
Also, the article in @Pavel Nikolov's answer provides some great direction (it is written by Glenn Block, MEF's program manager).
I've been using MEF for a while, and the key factor for when we use it instead of IoC products is that we regularly have 3-5 implementations of a given interface sitting in our plugins directory at any given time. Which of those implementations should be used is something that can only be decided at runtime.
MEF is good at letting you do just that. Typically, IoC is geared toward making sure you can swap out, for the canonical example, an IUserRepository based on ORM Product 1 for one based on ORM Product 2 at some point in the future. However, most IoC solutions assume that there will only be one IUserRepository in effect at a given time.
If, however, you need to choose one based on the input data for a given page request, IoC containers are typically at a loss.
As an example, we do our permission checking and our validation via MEF plugins for a big web app I've been working on for a while. Using MEF, we can look at the record's CreatedOn date and go digging for the validation plugin that was actually in effect when the record was created, then run the record through BOTH that plugin AND the validator that's currently in effect and compare the record's validity over time.
This kind of power also lets us define fallthrough overrides for plugins. The apps I'm working on are actually the same codebase deployed for 30+ implementations, so we typically go looking for plugins by asking for:
An interface implementation that is specific to the current site and the specific record type in question.
An interface implementation that is specific to the current site, but works with any kind of record.
An interface implementation that works for any site and any record.
That lets us bundle a set of default plugins that will kick in, but only if a more specific implementation doesn't override them with customer-specific rules.
IoC is a great technology, but it really seems to be more about making it easy to code to interfaces instead of concrete implementations. However, swapping those implementations out is more of a project-level event with IoC. With MEF, you take the flexibility of interfaces and concrete implementations and make the choice between many available options at runtime.
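To make the fallthrough idea concrete outside of .NET, here is a rough Java analogue rather than MEF itself (java.util.ServiceLoader is a real JDK facility, but the Validator contract and the specificity rule are hypothetical): discover every available implementation at runtime and pick the most specific one that claims to apply.

import java.util.ServiceLoader;

// Hypothetical plugin contract; implementations are listed in
// META-INF/services/Validator and discovered at runtime.
interface Validator {
    boolean supports(String siteId, String recordType);
    int specificity();            // higher = more specific (site + record > site only > default)
    boolean validate(Object record);
}

class ValidatorRegistry {
    // Pick the most specific validator that supports this site and record type,
    // falling through to a bundled default when nothing customer-specific matches.
    static Validator lookup(String siteId, String recordType) {
        Validator best = null;
        for (Validator candidate : ServiceLoader.load(Validator.class)) {
            if (candidate.supports(siteId, recordType)
                    && (best == null || candidate.specificity() > best.specificity())) {
                best = candidate;
            }
        }
        if (best == null) {
            throw new IllegalStateException("No validator available for site " + siteId);
        }
        return best;
    }
}

This is only an analogue; MEF's catalogs and export metadata express the same kind of precedence with less hand-rolled plumbing.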
I apologize for being off-topic. I simply wanted to say that there are two flaws that render MEF an unnecessary complication:
It is attribute-based, which doesn't help you figure out why things work the way they do. There's no way to get at the details buried in the internals of the framework to see what exactly is going on, no way to get a tracing log, and no way to hook into the resolving mechanism and handle unresolved situations manually.
It doesn't have any troubleshooting mechanism for figuring out why some parts get rejected. It points at a failing part, but it doesn't tell you why that part failed.
So I am very disappointed with it. I spent too much time fighting windmills trying to bootstrap a few classes instead of working on the real problems. I am convinced there is nothing better than the old-school dependency injection technique, where you have full control over what is created and when, and can trace everything in the VS debugger. I wish somebody who advocates MEF would present a bunch of good reasons why I would choose it over plain DI.
I agree that MEF can be a fully capable IoC framework. In fact, I'm writing an application right now that uses MEF for both extensibility and IoC. I took the generic parts of it, made them into a "framework", and open-sourced it as SoapBox Core, in case people want to see how it works.
In particular, take a look at how the Host works if you want to see MEF in action.
I think the title speaks for itself, guys - why should I write an interface and then implement a concrete class if there is only ever going to be one concrete implementation of that interface?
I think you shouldn't ;)
There's no need to shadow all your classes with corresponding interfaces.
Even if you're going to make more implementations later, you can always extract the interface when it becomes necessary.
This is a question of granularity. You shouldn't clutter your code with unnecessary interfaces, but they are useful at boundaries between layers.
Someday you may try to test a class that depends on this interface. Then it's nice that you can mock it.
I'm constantly creating and removing interfaces. Some were not worth the effort and some are really needed. My intuition is mostly right but some refactorings are necessary.
The question is, if there is only going to ever be one concrete implementation, should there be an interface?
YAGNI - You Ain't Gonna Need It (from Wikipedia):
According to those who advocate the YAGNI approach, the temptation to write code that is not necessary at the moment, but might be in the future, has the following disadvantages:
* The time spent is taken from adding, testing or improving necessary functionality.
* The new features must be debugged, documented, and supported.
* Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
* Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
* It leads to code bloat; the software becomes larger and more complicated.
* Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
* Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.
Two somewhat conflicting answers to your question:
You do not need to extract an interface from every single concrete class you construct, and
Most Java programmers don't build as many interfaces as they should.
Most systems (even "throwaway code") evolve and change far past what their original design intended for them. Interfaces help them to grow flexibly by reducing coupling. In general, here are the warning signs that you ought to be coding to an interface:
Do you even suspect that another concrete class might need the same interface (like, if you suspect your data access objects might need XML representation down the road -- something that I've experienced)?
Do you suspect that your code might need to live on the other side of a Web Services layer?
Does your code form a service layer for some outside client?
If you can honestly answer "no" to all these questions, then an interface might be overkill. Might. But again, unforeseen consequences are the name of the game in programming.
You need to decide what the programming interface is by specifying the public functions. If you don't do a good job of that, the class will be difficult to use.
Therefore, if you decide later you need to create a formal interface, you should have the design ready to go.
So, you do need to design an interface, but you don't need to write it as an interface and then implement it.
I use a test driven approach to creating my code. This will often lead me to create interfaces where I want to supply a mock or dummy implementation as part of my test fixture.
I would not normally create any code unless it has some relevance to my tests, and since you cannot easily test an interface, only an implementation, that leads me to create interfaces if I need them when supplying dependencies for a test case.
I will also sometimes create interfaces when refactoring, to remove duplication or improve code readability.
You can always refactor your code to introduce an interface if you find out you need one later.
The only exception to this would be if I were designing an API for release to a third party - where the cost of making API changes is high. In this case I might try to predict the type of changes I might need to do in the future and work out ways of creating my API to minimise future incompatible changes.
One thing no one has mentioned yet is that sometimes an interface is necessary in order to avoid dependency issues: you can have the interface in a common project with few dependencies and the implementation in a separate project with lots of dependencies.
"Only Ever going to have One implementation" == famous last words
It doesn't cost much to make an interface and then derive a concrete class from it. The process of doing it can make you rethink your design and often leads to a better end product. And once you've done it, if you ever find yourself eating those words - as frequently happens - you won't have to worry about it. You're already set. Whereas otherwise you have a pile of refactoring to do and it's gonna be a pain.
Edited to clarify: I'm working on the assumption that this class is going to be spread relatively far and wide. If it's a tiny utility class used by one or two other classes in a single package, then yeah, don't worry about it. If it's a class that's going to be used in multiple packages by multiple other classes, then my previous answer applies.
The question should be: "how can you ever be sure, that there is only going to ever be one concrete implementation?"
How can you be totally sure?
By the time you thought this through, you would already have created the interface and be on your way without assumptions that might turn out to be wrong.
With today's coding tools (like Resharper), it really doesn't take much time at all to create and maintain interfaces alongside your classes, whereas discovering that now you need an extra implementation and to replace all concrete references can take a long time and is no fun at all - believe me.
A lot of this is taken from a Rainsberger talk on InfoQ: http://www.infoq.com/presentations/integration-tests-scam
There are 3 reasons to have a class:
It holds some Value
It helps Persist some entity
It performs some Service
The majority of services should have interfaces. An interface creates a boundary, hides the implementation, and you already have a second client: all of the tests that interact with that service.
Basically, if you would ever want to mock it out in a unit test, it should have an interface.
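A minimal Java sketch of that last point (the names and the use of JUnit 4 are illustrative, not taken from the answers above) - the test really is a second client of the service's interface:

import static org.junit.Assert.assertTrue;
import org.junit.Test;

// The service boundary: callers depend on this, not on a concrete mailer.
interface MailService {
    void send(String to, String body);
}

// A hand-rolled test double that records what was sent
// (the production implementation talking to SMTP is omitted).
class FakeMailService implements MailService {
    final java.util.List<String> sent = new java.util.ArrayList<>();
    public void send(String to, String body) { sent.add(to + ": " + body); }
}

class WelcomeNotifier {
    private final MailService mail;
    WelcomeNotifier(MailService mail) { this.mail = mail; }
    void welcome(String user) { mail.send(user, "Welcome aboard!"); }
}

public class WelcomeNotifierTest {
    @Test
    public void sendsAWelcomeMail() {
        FakeMailService fake = new FakeMailService();
        new WelcomeNotifier(fake).welcome("alice@example.com");
        assertTrue(fake.sent.get(0).startsWith("alice@example.com"));
    }
}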