Don't understand the concept of extends in URL.openConnection() in Java (Eclipse)

Hi, I am trying to learn Java deeply, so I am digging into the JDK source code behind the following lines:
URL url = new URL("http://www.google.com");
URLConnection tmpConn = url.openConnection();
I attached the source code, set a breakpoint at the second line, and stepped into the code. I can see the code flow is: URL.openConnection() -> sun.net.www.protocol.http.Handler.openConnection()
I have two questions about this
First, in URL.openConnection() the code is:
public URLConnection openConnection() throws java.io.IOException {
return handler.openConnection(this);
}
handler is an object of type URLStreamHandler, defined as below:
transient URLStreamHandler handler;
But URLStreamHandler is an abstract class and the method openConnection() is not implemented in it, so when handler calls this method it has to find a subclass that implements it, right? But there are a lot of classes that implement this method in sun.net.www.protocol (like http.Handler, ftp.Handler). How does the code know which openConnection() method it should call? In this example, handler.openConnection() goes into http.Handler, which is correct (if I set the URL to ftp://www.google.com, it goes into ftp.Handler). I cannot understand the mechanism.
Second, I have attached the source code so I can step into the JDK and see the variables, but for many classes, like sun.net.www.protocol.http.Handler, there is no source code in src.zip. I googled this class and found source code online, but why didn't they put it (and many other classes) in src.zip? Where can I find a comprehensive version of the source code?
Thanks!

First the easy part:
... I googled this class and found source code online, but why didn't they put it (and many other classes) in src.zip?
Two reasons:
In the old days when the Java code base was proprietary, this was treated as secret-ish ... and not included in the src.zip. When they relicensed Java 6 under the GPL, they didn't bother to change this. (Don't know why. Ask Oracle.)
Because any code in the sun.* tree is officially "an implementation detail subject to change without notice". If they provided the code directly, it would help customers ignore that advice. That could lead to more friction / bad press when customer code breaks as a result of an unannounced change to sun.* code.
Where can I find a comprehensive version of the source code?
You can find it in the OpenJDK 6 / 7 / 8 repositories and associated download bundles:
http://hg.openjdk.java.net/jdk6/jdk6 - http://download.java.net/openjdk/jdk6/
http://hg.openjdk.java.net/jdk7/jdk7 - http://download.java.net/openjdk/jdk7/
http://hg.openjdk.java.net/jdk8/jdk8
Now for the part about "learning Java deeply".
First, I think you are probably going about this learning in a "suboptimal" fashion. Rather than reading the Java class library source, I think you should be reading books on Java and design patterns, and writing code for yourself.
To the specifics:
But URLStreamHandler is an abstract class and the method openConnection() is not implemented in it, so when handler calls this method it has to find a subclass that implements it, right?
At the point where the handler calls that method, it is calling it on an instance of the subclass. So finding the right method is handled by the JVM ... just like any other polymorphic dispatch.
The tricky part is how you got the instance of the sun.net.www.protocol.* handler class. And that happens something like this:
When a URL object is created, it calls getURLStreamHandler(protocol) to obtain a handler instance.
The code for this method looks to see if the handler instance for the protocol already exists and returns that if it does.
Otherwise, it sees if a protocol handler factory exists, and if it does it uses that to create the handler instance. (The protocol handler factory object can be set by an application.)
Otherwise, it searches a configurable list of Java packages to find a class whose fully qualified name is package + "." + protocol + ".Handler", loads it, and uses reflection to create an instance. (Configuration is via a system property.)
The reference to handler is stored in the URL's handler field, and the URL construction continues.
So, later on, when you call openConnection() on the URL object, the method uses the Handler instance that is specific to the protocol of the URL to create the connection object.
The purpose of this complicated process is to support URL connections for an open-ended set of protocols, to allow applications to provide handlers for new protocols, and to substitute their own handlers for existing protocols, both statically and dynamically. (And the code is more complicated than I've described above because it has to cope with multiple threads.)
This is making use of a number of design patterns (Caches, Adapters, Factory Objects, and so on) together with Java specific stuff such as the system properties and reflection. But if you haven't read about and understood those design patterns, etcetera, you are unlikely to recognize them, and as a result you are likely to find the code totally bamboozling. Hence my advice above: learn the basics first!!
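Here is a greatly simplified sketch of that lookup in Java. This is not the real JDK code — the class and method names are made up, and caching, the factory hook, and thread-safety are all omitted — but it shows the package search and reflection steps described above:

import java.net.URLStreamHandler;

class HandlerLookupSketch {
    // Roughly what java.net.URL does to find a handler for a protocol string.
    static URLStreamHandler lookup(String protocol) {
        // Extra packages can be supplied via the java.protocol.handler.pkgs system
        // property (separated by '|'); the JDK's own package is tried last.
        String pkgs = System.getProperty("java.protocol.handler.pkgs", "");
        String searchPath = pkgs.isEmpty() ? "sun.net.www.protocol"
                                           : pkgs + "|sun.net.www.protocol";
        for (String pkg : searchPath.split("\\|")) {
            String className = pkg + "." + protocol + ".Handler"; // e.g. sun.net.www.protocol.http.Handler
            try {
                Class<?> cls = Class.forName(className);
                return (URLStreamHandler) cls.getDeclaredConstructor().newInstance();
            } catch (ReflectiveOperationException e) {
                // not found in this package - try the next one
            }
        }
        return null; // URL's constructor turns this into a MalformedURLException
    }
}

So for "http://www.google.com" the computed class name is sun.net.www.protocol.http.Handler, and for an ftp: URL it is sun.net.www.protocol.ftp.Handler — which is exactly the behaviour you observed in the debugger.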

Take a look at URL.java. openConnection uses the URLStreamHandler that was previously set in the URL object itself.
The constructor calls getURLStreamHandler, which generates a class name dynamically and then loads and instantiates the appropriate class with the class loader.
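The application-facing hook in that process is URL.setURLStreamHandlerFactory. Here is a small, hedged example with a made-up "demo" protocol (the class names are invented for illustration) showing how a handler you write yourself ends up being the one openConnection() dispatches to:

import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

public class CustomProtocolDemo {
    public static void main(String[] args) throws Exception {
        // The factory is consulted before the package search; it may be set only once per JVM.
        URL.setURLStreamHandlerFactory(protocol ->
                "demo".equals(protocol) ? new DemoHandler() : null);

        URL url = new URL("demo://example");        // accepted because our factory knows "demo"
        URLConnection conn = url.openConnection();  // dispatches to DemoHandler.openConnection()
        System.out.println(conn);
    }

    static class DemoHandler extends URLStreamHandler {
        @Override
        protected URLConnection openConnection(URL u) throws IOException {
            return new URLConnection(u) {           // trivial connection that does nothing
                @Override
                public void connect() { }
            };
        }
    }
}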

But URLStreamHandler is an abstract class and the method openConnection()
is not implemented in it, so when handler calls this method it has to
find a subclass that implements it, right?
The method has to be either declared abstract or implemented in URLStreamHandler. If you then take an instance of a class that extends URLStreamHandler, refer to it through a variable of type URLStreamHandler, and call openConnection(), the JVM will invoke the version overridden in the subclass if there is one; if there is none, it calls the implementation in URLStreamHandler itself. (A subclass that leaves an abstract method unimplemented is itself abstract and cannot be instantiated, so there is always some concrete implementation to call.)
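A tiny, self-contained illustration of that dispatch (hypothetical class names, not the real JDK ones):

abstract class Handler {
    abstract String open();             // plays the role of URLStreamHandler.openConnection()
}

class HttpHandler extends Handler {
    @Override
    String open() { return "http connection"; }
}

class FtpHandler extends Handler {
    @Override
    String open() { return "ftp connection"; }
}

public class DispatchDemo {
    public static void main(String[] args) {
        Handler h = new HttpHandler();  // static type Handler, runtime type HttpHandler
        System.out.println(h.open());   // prints "http connection" - chosen at runtime
        h = new FtpHandler();
        System.out.println(h.open());   // prints "ftp connection"
    }
}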


Is it possible to implement a module that is not a WPF module (a standard class library, no screens)?

I am developing a modular WPF application with Prism in .NET 5.0 (using MVVM and DryIoc), and I would like to have a module that is not a WPF module, i.e., a module with functionality that can be used by any other module. I don't want any project references, because I want to keep the loosely coupled idea of the modules.
My first question is: is it conceptually correct? Or is it mandatory that a module has a screen? I guess it should be ok.
The second and more important (for me) is, what would be the best way to create the instance?
This is the project (I know I should review the names in this project):
HotfixSearcher is the main class, the one I need to get instantiated. In this class, for example, I subscribe to some events.
And this is the class that implements the IModule interface (the module class):
using Prism.Ioc;
using Prism.Modularity;

namespace SearchHotfix.Library
{
    public class HotfixSearcherModule : IModule
    {
        public HotfixSearcherModule()
        {
        }

        public void OnInitialized(IContainerProvider containerProvider)
        {
            // Create Searcher instance
            var searcher = containerProvider.Resolve<IHotfixSearcher>();
        }

        public void RegisterTypes(IContainerRegistry containerRegistry)
        {
            containerRegistry.RegisterSingleton<IHotfixSearcher, HotfixSearcher>();
        }
    }
}
That is the only way I found to get the class instantiated, but I am not a hundred per cent comfortable with creating an instance that is not used, I think it does not make much sense.
For modules that have screens, the instances get created when navigating to them using the RequestNavigate method:
_regionManager.RequestNavigate(RegionNames.ContentRegion, "ContentView");
But since this is only a library with no screens, I can't find any other way to get this instantiated.
According to the Prism documentation, subscribing to an event should be enough, but when I tried doing that from within my main class HotfixSearcher it did not work (breakpoints on the constructor or on the handler of the event I subscribe to are never hit).
When I do it this way, instead, the instance is created, I hit the constructor breakpoint, and obviously the instance is subscribed to the event since that is done in the constructor.
To sum up, is there a way to get rid of that var searcher = containerProvider.Resolve<IHotfixSearcher>(); and a better way to achieve this?
Thanks in advance!
Or is it mandatory that a module has a screen?
No, of course not, modules have nothing to do with views or view models. They are just a set of registrations with the container.
what would be the best way to create the instance?
Let the container do the work. Normally, you have (at least) one assembly that only contains public interfaces (and the associated enums), but no modules. You reference that from the module and register the module's implementations of the relevant interfaces within the module's RegisterTypes method. Some other module (or the main app) can then have classes that get the interfaces as constructor parameters, and the container will resolve (i.e. create) the concrete types registered in the module, although they are internal or even private and completely unknown outside the module.
This is as loose a coupling as it gets if you don't want to sacrifice strong typing.
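A minimal sketch of that layout (hypothetical names, and everything collapsed into one listing for brevity; in reality each namespace would live in its own project):

// Contracts assembly: public interfaces only, referenced by every module that needs them.
namespace Contracts
{
    public interface IHotfixSearcher
    {
        void Search();
    }
}

// Module assembly: the concrete type stays internal; only the registration is public.
namespace SearchHotfix.Library
{
    using Contracts;
    using Prism.Ioc;
    using Prism.Modularity;

    internal class HotfixSearcher : IHotfixSearcher
    {
        public void Search() { /* ... */ }
    }

    public class HotfixSearcherModule : IModule
    {
        public void RegisterTypes(IContainerRegistry containerRegistry) =>
            containerRegistry.RegisterSingleton<IHotfixSearcher, HotfixSearcher>();

        public void OnInitialized(IContainerProvider containerProvider) { }
    }
}

// Some other module (or the shell): gets the implementation injected without ever
// referencing the SearchHotfix.Library project.
namespace OtherModule
{
    using Contracts;

    public class SomeViewModel
    {
        public SomeViewModel(IHotfixSearcher searcher) => searcher.Search();
    }
}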
is there a way to get rid of that var searcher = containerProvider.Resolve<IHotfixSearcher>(); and a better way to achieve this?
You can skip the var searcher = part :-) But if the HotfixSearcher is never injected anywhere, it won't be created unless you do it yourself. OnInitialized is the perfect spot for this, because it runs after all modules have had their chance to RegisterTypes, so all dependencies should be registered.
If HotfixSearcher is not meant to be injected, you can also drop IHotfixSearcher and resolve HotfixSearcher directly:
public void OnInitialized(IContainerProvider containerProvider)
{
    containerProvider.Resolve<HotfixSearcher>();
}
I am not a hundred per cent comfortable with creating an instance that is not used, I think it does not make much sense.
It is used, I suppose, although not through calling one of its methods. It's used by sending it an event. That's just fine. Think of it like Task.Run - it's fine for the task to exist in seeming isolation, too.

Any way to trigger creation of a list of all classes in a hierarchy in Swift 4?

Edit: So far it looks like the answer to my question is, "You can't do that in Swift." I currently have a solution whereby the subclass names are listed in an array and I loop around and instantiate them to trigger the process I'm describing below. If this is the best that can be done, I'll switch it to a plist so that at least it's externally defined. Another option would be to scan a directory and load all files found, then I would just need to make sure the compiler output for certain classes is put into that directory...
I'm looking for a way to do something that I've done in C++ a few times. Essentially, I want to build a series of concrete classes that implement a particular protocol, and I want those classes to automatically register themselves so that I can obtain a list of all such classes. It's a classic Prototype pattern (see the GoF book) with a twist.
Here's my approach in C++; perhaps you can give me some ideas for how to do this in Swift 4? (This code is grossly simplified, but it should demonstrate the technique.)
#include <set>

class Base {
private:
    static std::set<Base*> allClasses;
    Base(Base &);                 // never defined - copying disallowed
protected:
    Base() {
        allClasses.insert(this);
    }
public:
    static std::set<Base*> getAllClasses();
    virtual Base* clone() = 0;
};
As you can see, every time a subclass is instantiated, a pointer to the object will be added to the static Base::allClasses by the base class constructor.
This means every class inherited from Base can follow a simple pattern and it will be registered in Base::allClasses. My application can then retrieve the list of registered objects and manipulate them as required (clone new ones, call getter/setter methods, etc).
class Derived : public Base {
private:
    static Derived global;        // force default constructor call
    Derived() {
        // initialize the properties...
    }
    Derived(Derived &d) {
        // whatever is needed for cloning...
    }
public:
    virtual Derived* clone() {
        return new Derived(*this);
    }
};
My main application can retrieve the list of objects and use it to create new objects of classes that it knows nothing about. The base class could have a getName() method that the application uses to populate a menu; now the menu automatically updates when new subclasses are created with no code changes anywhere else in the application. This is a very powerful pattern in terms of producing extensible, loosely coupled code...
I want to do something similar in Swift. However, it looks like Swift is similar to Java in that it has some kind of runtime loader, and the subclasses in this scheme (such as Derived) are not loaded because they're never referenced. And if they're not loaded, then the global variable never triggers the constructor call and the object isn't registered with the base class. Breakpoints in the subclass constructor show that it's not being invoked.
Is there a way to do the above? My goal is to be able to add a new subclass and have the application automatically pick up the fact that the class exists without me having to edit a plist file or doing anything other than writing the code and building the app.
Thanks for reading this far — I'm sure this is a bit of a tricky question to comprehend (I've had difficulty in the past explaining it!).
I'm answering my own question; maybe it'll help someone else.
My goal is to auto-initialize subclasses such that they can register with a central authority and allow the application to retrieve a list of all such classes. As I put in my edited question above, there doesn't appear to be a way to do this in Swift. I have confirmed this now.
I've tried a bunch of different techniques and nothing seems to work. My goal was to be able to add a .swift file with a class in it and rebuild, and have everything automagically know about the new class. I will be doing this a little differently, though.
I now plan to put all subclasses that need to be initialized this way into a particular directory in my application bundle, then my AppDelegate (or similar class) will be responsible for invoking a method that scans the directory using the filenames as the class names, and instantiating each one, thus building the list of "registered" subclasses.
When I have this working, I'll come back and post the code here (or in a GitHub project and link to it).
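For reference, a rough, untested sketch of the approach described above (the directory and module names are hypothetical, and it assumes the registered classes inherit from NSObject so they are visible to NSClassFromString):

import Foundation

// Scan a bundle directory and instantiate a class for each file found there.
// Instantiating the class is what triggers registration with the central authority.
func registerSubclasses(inBundleDirectory directory: String, moduleName: String) {
    guard let dirURL = Bundle.main.resourceURL?.appendingPathComponent(directory),
          let files = try? FileManager.default.contentsOfDirectory(atPath: dirURL.path) else {
        return
    }
    for file in files {
        let className = (file as NSString).deletingPathExtension
        // Swift classes are namespaced by module, e.g. "MyApp.Derived".
        if let cls = NSClassFromString("\(moduleName).\(className)") as? NSObject.Type {
            _ = cls.init()   // the base-class initializer registers the instance
        }
    }
}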
Same boat. So far the solution I've found is to list the classes manually, but not as an array of strings (which is error-prone). An array of classes such as this does the job:
class AClass {
    class var subclasses: [AClass.Type] {
        return [BClass.self, CClass.self, DClass.self]
    }
}
As a bonus, this approach allows me to handle trees of classes, simply by overriding subclasses in each subclass.
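A quick sketch of that tree idea, extending the example above (EClass is a made-up grandchild; each class lists only its direct children and leaves return an empty array so a walker can recurse safely):

class AClass {
    class var subclasses: [AClass.Type] { return [BClass.self, CClass.self, DClass.self] }
}

class BClass: AClass {
    // BClass's own subtree
    override class var subclasses: [AClass.Type] { return [EClass.self] }
}

class CClass: AClass {
    override class var subclasses: [AClass.Type] { return [] }
}

class DClass: AClass {
    override class var subclasses: [AClass.Type] { return [] }
}

class EClass: BClass {
    override class var subclasses: [AClass.Type] { return [] }
}

// Walk the tree to collect every class in the hierarchy.
func allClasses(under root: AClass.Type = AClass.self) -> [AClass.Type] {
    return root.subclasses.flatMap { [$0] + allClasses(under: $0) }
}
// allClasses() -> [BClass, EClass, CClass, DClass]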

Understanding Zend_Controller_Request_Abstract and other core Zend classes

I am digging deep into the Zend Framework and at this point I am a little confused. I am particularly checking out Zend_Controller_Action (*_Action), Zend_Controller_Request_Http (*_Http) and Zend_Controller_Request_Abstract (*_Abstract).
The *_Abstract class, as its name suggests, is an abstract class, hence cannot be instantiated; it mostly provides method stubs along with a few final implementations. The actual implementation is in the *_Http and *_Simple classes that subclass *_Abstract. Fair enough.
Now I am looking at the *_Action class, right here: http://framework.zend.com/apidoc/1.0/Zend_Controller/Zend_Controller_Action.html
Taking a look at the $_request variable, the documentation states that it is an instance of type *_Abstract. At this point I am confused, since I do not know why it should be declared as type *_Abstract and not *_Http, given that one cannot technically have an instance of an abstract class.
So my question:
Why is an instance of an abstract class being declared here?
Moving on, I want to override the request class's getParams() method, since this is how our application retrieves all parameters and I would like to apply some common sanitization and blacklisting rules to all of our input right there.
Unfortunately, when in my BaseController (the main controller that is subclassed by all other controllers) I declare something to the effect of:
$_request = new RequestClass(); //RequestClass subclasses Zend_Controller_Request_Http and overrides getParams()
my application does not launch the way it ought to (I get a blank screen).
For those more curious, RequestClass's getParams() does nothing fancy:
public function getParams()
{
    $params = parent::getParams();
    // sanitization rules over $params
    return $params;
}
The type hint of Zend_Controller_Request_Abstract effectively means the request must be an instance of a class that extends Zend_Controller_Request_Abstract.
Unless you're deliberately not using ZF's routing, you'll probably be better off doing your sanitization of parameters through the routes. Otherwise, if you're getting a blank screen, that means display_errors is turned off and the PHP error or exception is being logged instead. Check your web server error log to see what the actual problem is.

How to use GWT SerializationStreamFactory

I am trying to serialize an object in GWT using SerializationStreamFactory, but I am not able to get it working. Here is the sample code from my POC:
import com.google.gwt.user.client.rpc.SerializationException;
import com.google.gwt.user.client.rpc.SerializationStreamFactory;
import com.google.gwt.user.client.rpc.SerializationStreamReader;
import com.google.gwt.user.client.rpc.SerializationStreamWriter;
...........
Some code here....
.........
......
SerializationStreamFactory factory = (SerializationStreamFactory) GWT.create(MyClass.class);
SerializationStreamWriter writer = factory.createStreamWriter();
try {
    writer.writeObject(new MyClass("anirudh"));
    String value = writer.toString();
    SerializationStreamReader reader = factory.createStreamReader(value);
    MyClass myObj = (MyClass) reader.readObject();
    System.out.println(myObj.getName());
} catch (SerializationException e) {
    e.printStackTrace();
}
It gave me the following exception
Caused by: java.lang.RuntimeException: Deferred binding failed for 'com.anirudh..client.MyClass' (did you forget to inherit a required module?)
Also, in my code the class whose object I am trying to serialize implements IsSerializable:
MyClass implements IsSerializable
I don't want to use the GWT AutoBean framework because it does not fit my use case. Also, I am not using the GWT-RPC framework, and right now I am quite adamant about using SerializationStreamFactory :D because I seriously want to know how this thing works.
Can anyone share a working example of SerializationStreamFactory, or help me out by pointing out any mistake(s) I made?
Thanks in advance
SerializationStreamFactory factory = (SerializationStreamFactory) GWT.create(MyClass.class);
What are you expecting this line to do? GWT will attempt to find a replace-with or generate-with rule that matches this class (either when-type-assignable or when-type-is), or failing that will attempt to invoke a zero-arg constructor on MyClass, effectively new MyClass(). Is this what you are expecting?
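For reference, such a rule lives in the module's .gwt.xml file; a hypothetical example (the class names here are made up) looks like this:

<!-- Deferred-binding rule: when GWT.create() is asked for MyClass (or a subtype),
     run this generator instead of calling the default constructor. -->
<generate-with class="com.anirudh.rebind.MySerializerGenerator">
    <when-type-assignable class="com.anirudh.client.MyClass"/>
</generate-with>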
The selected exception you've pasted suggests that MyClass may not be on the source path that GWT has been given to compile from, but the full error log will provide more information.
It looks as though you are trying to mimic the generated RPC code, where a *Async RPC interface would be implemented by code that extends com.google.gwt.user.client.rpc.impl.RemoteServiceProxy (which implements SerializationStreamFactory). That base implementation is extended further to initialize several fields, such as the com.google.gwt.user.client.rpc.impl.Serializer instance, which is actually responsible for serializing and deserializing object streams.
Serializers are created (by default) from the base class com.google.gwt.user.client.rpc.impl.SerializerBase, through the rebind class com.google.gwt.user.rebind.rpc.TypeSerializerCreator. If you've built your own generator for MyClass, you should be kicking this off to get the work done, just as ProxyCreator already does.
Remember when building your own serialization/deserialization mechanism that you need to decide which types can be marshalled within this system - if you open it to all types, then you will need to generate FieldSerializer types for all possible objects on the source path. This will greatly expand the size of your compiled code.
If your main goal is learning how this 'magic' works, dig into the generators and associated code that live in the com.google.gwt.user.rebind.rpc package. There are other libraries that leverage these ideas, such as the gwt-atmosphere project (see https://github.com/Atmosphere/atmosphere to get started). Also review the generated code that GWT creates when it builds a 'traditional' RPC interface.

ServiceContainer, IoC, and disposable objects

I have a question, and I'm going to tag this subjective since that's what I think it evolves into, more of a discussion. I'm hoping for some good ideas or some thought-provokers. I apologize for the long-winded question but you need to know the context.
The question is basically:
How do you deal with concrete types in relation to IoC containers? Specifically, who is responsible for disposing them, if they require disposal, and how does that knowledge get propagated out to the calling code?
Do you require them to be IDisposable? If not, is that code future-proof, or is the rule that you cannot use disposable objects? If you enforce IDisposable requirements on interfaces and concrete types to be future-proof, whose responsibility is it to dispose of objects injected as part of constructor calls?
Edit: I accepted the answer by Chris Ballard since it's the closest one to the approach we ended up with.
Basically, we always return a type that looks like this:
public interface IService<T> : IDisposable
    where T : class
{
    T Instance { get; }
    Boolean Success { get; }
    String FailureMessage { get; } // in case Success=false
}
We then return an object implementing this interface back from both .Resolve and .TryResolve, so that what we get in the calling code is always the same type.
Now, the object implementing this interface, IService<T> is IDisposable, and should always be disposed of. It's not up to the programmer that resolves a service to decide whether the IService<T> object should be disposed or not.
However, and this is the crucial part, whether the service instance should be disposed or not, that knowledge is baked into the object implementing IService<T>, so if it's a factory-scoped service (ie. each call to Resolve ends up with a new service instance), then the service instance will be disposed when the IService<T> object is disposed.
This also made it possible to support other special scopes, like pooling. We can now say that we want minimum 2 service instances, maximum 15, and typically 5, which means that each call to .Resolve will either retrieve a service instance from a pool of available objects, or construct a new one. And then, when the IService<T> object that holds the pooled service is disposed of, the service instance is released back into its pool.
Sure, this made all code look like this:
using (var service = ServiceContainer.Global.Resolve<ISomeService>())
{
    service.Instance.DoSomething();
}
but it's a clean approach, and it has the same syntax regardless of the type of service or concrete object in use, so we chose that as an acceptable solution.
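For illustration, a stripped-down sketch of what such a wrapper might look like (the class name is hypothetical; the actual container plumbing, pooling, and error handling are omitted):

using System;

internal sealed class ServiceHandle<T> : IService<T> where T : class
{
    private readonly bool _ownsInstance;   // true for factory/pooled scope, false for container-owned singletons

    public ServiceHandle(T instance, bool ownsInstance)
    {
        Instance = instance;
        _ownsInstance = ownsInstance;
        Success = instance != null;
        FailureMessage = Success ? null : "could not resolve " + typeof(T).Name;
    }

    public T Instance { get; }
    public Boolean Success { get; }
    public String FailureMessage { get; }

    public void Dispose()
    {
        // Only dispose the underlying service if this handle owns it and the service is disposable.
        if (_ownsInstance && Instance is IDisposable disposable)
        {
            disposable.Dispose();
        }
    }
}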
Original question follows, for posterity
Long-winded question comes here:
We have an IoC container that we use, and recently we discovered what amounts to a problem.
In non-IoC code, when we wanted to use, say, a file, we used a class like this:
using (Stream stream = new FileStream(...))
{
    ...
}
There was no question as to whether this class was something that held a limited resource or not, since we knew that files had to be closed, and the class itself implemented IDisposable. The rule is simply that every class we construct an object of, that implements IDisposable, has to be disposed of. No questions asked. It's not up to the user of this class to decide if calling Dispose is optional or not.
Ok, so on to the first step towards the IoC container. Let's assume we don't want the code to talk directly to the file, but instead go through one layer of indirection. Let's call this class a BinaryDataProvider for this example. Internally, the class is using a stream, which is still a disposable object, so the above code would be changed to:
using (BinaryDataProvider provider = new BinaryDataProvider(...))
{
    ...
}
This doesn't change much. The knowledge that the class implements IDisposable is still here, no questions asked, we need to call Dispose.
But, let's assume that we have classes that provide data that right now doesn't use any such limited resources.
The above code could then be written as:
BinaryDataProvider provider = new BinaryDataProvider();
...
OK, so far so good, but here comes the meat of the question. Let's assume we want to use an IoC container to inject this provider instead of depending on a specific concrete type.
The code would then be:
IBinaryDataProvider provider =
    ServiceContainer.Global.Resolve<IBinaryDataProvider>();
...
Note that I assume there is an independent interface available that we can access the object through.
With the above change, what if we later on want to use an object that really should be disposed of? None of the existing code that resolves that interface is written to dispose of the object, so what now?
The way we see it, we have to pick one solution:
Implement a runtime check: if a concrete type being registered implements IDisposable, require that the interface it is exposed through also implements IDisposable. This is not a good solution.
Enforce a constraint on the interfaces being used: they must always inherit from IDisposable, in order to be future-proof.
Enforce at runtime that no concrete type can be IDisposable, since this is specifically not handled by the code using the IoC container.
Just leave it up to the programmer to check if the object implements IDisposable and "do the right thing"?
Are there others?
Also, what about injecting objects in constructors? Our container, and some of the other containers we've looked into, is capable of injecting a fresh object into a parameter of a constructor of a concrete type. For instance, if our BinaryDataProvider needs an object that implements the ILogging interface, and we enforce IDisposable-ability on these objects, whose responsibility is it to dispose of the logging object?
What do you think? I want opinions, good and bad.
One option might be to go with a factory pattern, so that the objects created directly by the IoC container never need to be disposed themselves, e.g.:
IBinaryDataProviderFactory factory =
    ServiceContainer.Global.Resolve<IBinaryDataProviderFactory>();
using (IBinaryDataProvider provider = factory.CreateProvider())
{
    ...
}
Downside is added complexity, but it does mean that the container never creates anything which the developer is supposed to dispose of - it is always explicit code which does this.
If you really want to make it obvious, the factory method could be named something like CreateDisposableProvider().
(Disclaimer: I'm answering this based on Java stuff. Although I program C#, I haven't proxied anything in C#, but I know it's possible. Sorry about the Java terminology.)
You could let the IoC framework inspect the object being constructed to see if it supports IDisposable. If not, you could use a dynamic proxy to wrap the actual object that the IoC framework provides to the client code. This dynamic proxy could implement IDisposable, so that you'd always deliver an IDisposable to the client. As long as you're working with interfaces, that should be fairly simple.
Then you'd just have the problem of communicating to the developer when the object is an IDisposable. I'm not really sure how this'd be done in a nice manner.
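A rough Java sketch of that idea, using java.lang.reflect.Proxy and AutoCloseable as the Java analogue of IDisposable (the class and method names here are made up):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

final class DisposableWrapper {

    // Wrap whatever the container resolved so the returned object always implements
    // AutoCloseable; close() is forwarded only if the target itself is closeable.
    @SuppressWarnings("unchecked")
    static <T> T wrap(Class<T> serviceInterface, T target) {
        InvocationHandler handler = (proxy, method, args) -> {
            if ("close".equals(method.getName()) && method.getParameterCount() == 0) {
                if (target instanceof AutoCloseable) {
                    ((AutoCloseable) target).close();
                }
                return null;
            }
            return method.invoke(target, args);   // delegate everything else to the real object
        };
        return (T) Proxy.newProxyInstance(
                serviceInterface.getClassLoader(),
                new Class<?>[] { serviceInterface, AutoCloseable.class },
                handler);
    }
}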
You actually came up with a very dirty solution: your IService contract violates the SRP, which is a big no-no.
What I recommend is to distinguish so-called "singleton" services from so-called "prototype" services. Lifetime of "singleton" ones is managed by the container, which may query at runtime whether a particular instance implements IDisposable and invoke Dispose() on shutdown if so.
Managing prototypes, on the other hand, is totally the responsibility of the calling code.
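To make the split concrete, here is a minimal, hand-rolled sketch (a hypothetical API, not a real container): the container disposes the singletons it owns on shutdown, while prototypes are handed out and forgotten, so the caller owns their lifetime.

using System;
using System.Collections.Generic;

public sealed class TinyContainer : IDisposable
{
    private readonly Dictionary<Type, Func<object>> _factories = new Dictionary<Type, Func<object>>();
    private readonly Dictionary<Type, object> _singletons = new Dictionary<Type, object>();
    private readonly HashSet<Type> _singletonTypes = new HashSet<Type>();

    public void RegisterSingleton<TService>(Func<TService> factory) where TService : class
    {
        _factories[typeof(TService)] = () => factory();
        _singletonTypes.Add(typeof(TService));
    }

    public void RegisterPrototype<TService>(Func<TService> factory) where TService : class
    {
        _factories[typeof(TService)] = () => factory();
    }

    public TService Resolve<TService>() where TService : class
    {
        var type = typeof(TService);
        if (!_singletonTypes.Contains(type))
            return (TService)_factories[type]();           // caller is responsible for disposal

        if (!_singletons.TryGetValue(type, out var instance))
        {
            instance = _factories[type]();
            _singletons[type] = instance;                   // container owns this one
        }
        return (TService)instance;
    }

    public void Dispose()
    {
        // Container shutdown: dispose every singleton that happens to be disposable.
        foreach (var instance in _singletons.Values)
            (instance as IDisposable)?.Dispose();
        _singletons.Clear();
    }
}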