As a long-time J2EE developer, I have always been curious as to why NetBeans uses (i.e., forces you to use) the Tomcat Manager app to deploy, while Eclipse seems perfectly happy/able to deploy without the manager app. Though I have googled this exhaustively over the years, I have never found even the beginning of an answer. Perhaps this is nothing more than how each product started, and it has never changed.
Does anyone have any insight or educated theories they would be willing to share?
[Edit] Sigh... to address shekhar's comment, I see that it is not absolutely clear that I am referring ONLY to using Tomcat. I mistakenly assumed that the title and context of my question were sufficient, but again, I am specifically referring to using Tomcat as the servlet container with these IDEs. Thanks.
[Edit] I don't know who down-voted this, but I have researched this for a long time and found zero reason for it. As for down-voting because it might not be useful, I think that is in the eye of the beholder; also, its usefulness can only be determined based on the answer, which is why I am asking.
Sounds like a good topic for Quora but anyway...
I can only speak about NetBeans. It originally used a patched version of Tomcat 3 (early NetBeans 3.x releases). Tomcat Manager was added in Tomcat 4, and it was used because it made it possible to integrate easily with an existing Tomcat installation without knowing many details about its setup. Starting and stopping Tomcat can use the default scripts, which will pick up your settings, and deploying does not need to care about file access rights; it just assumes that the manager works.
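To illustrate the appeal: deploying through the manager boils down to a plain authenticated HTTP request, so the IDE needs no file-system access to the server at all. A rough sketch of what such a call looks like (this uses the Tomcat 7+ "text" interface; Tomcat 4-6 used /manager/deploy instead, and the credentials and paths here are placeholders):

    # Deploy a WAR by asking the manager app over HTTP -- no access to
    # Tomcat's webapps directory is required.
    curl -u admin:secret \
      "http://localhost:8080/manager/text/deploy?path=/myapp&war=file:/tmp/myapp.war"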
Related
This is a project I've been working on off and on for months and I feel like I'm pretty close, but I just can't seem to get past the final hurdle.
The goal is to develop an organization extension library that contains both internal and 3rd party code that we frequently rely on.
History
As a test project, I started with Apache POI, because that is already in wide use in our environment. I have a plug-in and feature built just from the POI .jars that allows me to build our current POI applications, as long as I add the plug-in (from my workspace) to my build path. The apps work on the servers because we have already distributed the POI .jars by copying them manually.
The next step is taking that plug-in and getting it into an update site so that all of the servers and developers can synchronize on one version. I found and followed these two excellent blog articles (which I wish had existed when I started this project):
http://www.dalsgaard-data.eu/blog/wrap-an-existing-jar-file-into-a-plug-in/
http://www.dalsgaard-data.eu/blog/deploy-an-eclipse-update-site-to-ibm-domino-and-ibm-domino-designer/
One caveat: the articles are written for Domino 9 and we are running 8.5.3 here, but that only matters in the last (installation) step.
Current
This brings us to the problem. All of the above seems to have worked great up to a point. I can install my feature to my designer client from the eclipse update site and it works great. However, the install is failing when I import that into our updatesite.nsf database. This means that while the developers can all install from the updatesite if I put it on a network drive, that doesn't deploy updates to our servers.
The problem is that when I try to install from the .nsf update site, the Eclipse Updater just hangs. I've let it go for well over an hour and eventually Notes becomes completely unresponsive.
So the question is, is there anything I might have done wrong, either in the development of the plug-in or server configuration that might be causing this issue?
Additional Info
I'm looking at the OSGi console, and it is largely unhelpful. I am getting the following error as I'm trying to install:

    SEVERE Could not access digest on the site: no protocol: 0/5B004DDD5E38F3FF85257CAF004C72C7/$file/digest.zip ::class.method=unknown ::thread=Worker-7 ::loggername=org.eclipse.update.core
I could generate dumps if that would be useful.
Security is also locked down fairly tight here. It could be a security issue - is there a way to troubleshoot that? Once I get to the hang I'm just stuck guessing.
This has been edited for clarity and to update information
I know that this post is over 5 years old, but...
For those who find this and are trying to resolve the error

    SEVERE Could not access digest on the site: no protocol:

it is due to the update site project not having the URL of the Domino updatesite.nsf added to the Archives tab of the site.xml.
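For illustration, a site.xml fragment of the kind the Archives tab produces might look something like this; the server name, archive path, and feature/plug-in ids below are all hypothetical:

    <site>
       <!-- The Archives tab adds entries like this, pointing the site at the
            URL of the updatesite.nsf on the Domino server (placeholder URL) -->
       <archive path="plugins/com.example.poi.wrapper_1.0.0.jar"
                url="http://yourdominoserver/updatesite.nsf/"/>
       <feature url="features/com.example.poi.feature_1.0.0.jar"
                id="com.example.poi.feature" version="1.0.0"/>
    </site>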
I found that the updatesite.nsf also needs to be anonymously accessible, as no credentials are prompted for or passed through to the Domino server hosting the updatesite.nsf database (at least from DDE); YMMV from Eclipse. So if anonymous connections are blocked on the Domino server, you will be out of luck.
To develop a plug-in you really want to have 3 projects:
the plug-in
the feature
the update site
Of course a feature can contain more than one plug-in (and probably should), and an update site can contain more than one feature (and probably should). Once you have an update site project, it features a handy "Build All" button that makes sure the plug-in, feature, and update site get compiled in one go. That button is what you really want.
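As a concrete (and entirely hypothetical) example, a skeletal feature.xml aggregating two wrapped plug-ins could look like this; all ids, versions, and labels are placeholders:

    <feature id="com.example.poi.feature" label="POI Feature"
             version="1.0.0.qualifier" provider-name="EXAMPLE">
       <!-- Each wrapped library plug-in the feature aggregates -->
       <plugin id="com.example.poi.wrapper" version="0.0.0" unpack="false"/>
       <plugin id="com.example.thirdparty.libs" version="0.0.0" unpack="false"/>
    </feature>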
You can point your Domino Designer (or local Domino server) to the feature directory using a setting: add a plain-text .link file to framework/rcp/eclipse/links that contains the path to your install site, and it then picks up the features and plug-ins from there. After a build you need to restart Designer/the server to activate the updated feature.
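The .link file itself is a one-liner. A hypothetical example (the file name is arbitrary, e.g. myupdatesite.link, and the path is wherever your update site build lands):

    path=C:/dev/com.example.updatesite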
For the Domino server, the approach using an updatesite.nsf and the corresponding notes.ini setting makes the most sense (to me); an HTTP restart is required. Lazy people script the whole thing.
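If I remember correctly, the notes.ini variable in question is OSGI_HTTP_DYNAMIC_BUNDLES; assuming the database is named updatesite.nsf in the data directory, the line would be:

    OSGI_HTTP_DYNAMIC_BUNDLES=updatesite.nsf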
I still don't have a great answer for this, but I believe the issue is related to the environment here. I don't have the authority to change the environment, even if I were able to conclusively demonstrate it is the cause of this problem, so it is a moot point. All I can say is that at least one administrator computer had no issue installing from the update site.
For me, the solution for distributing the update site is to put it on a network drive and have everyone install it from there. The server has no problem using it from the updatesite.nsf.
I'm looking for a neat way to deploy and manage bundles on our Virgo container, but I also want to ensure that, should we want to move away from Virgo in a few years, we're not heavily tied to it. We're using Maven, so we get OBR for free, which could save us the work of maintaining a list of the dependency chains.
With that in mind, after having read this article - How to deploy OSGi apps and dependencies? - and some of the Virgo 3.5 docs, I'm slightly at odds about the best approach.
The Virgo docs suggest using the plan mechanism, but this ties our deployment descriptors to Virgo (not what I'm after). The article suggests I can use OBR through the GoGo console, which now ships as standard with Virgo. However, when trying to use this console to manage OBR, all I get is
    osgi> repos add /home/fuzzy/.m2/repository/repository.xml
    No repository admin service available
I've done some more hunting through the Virgo docs, but can't find anything in reference to OBR - only bug reports suggesting that some of the OBR commands have been left in the GoGo shell, inappropriately.
I've also written to the Virgo forum, but no-one seems to really want to help there. Before I go down the route of tying us to Virgo plans, I thought I'd have a quick go here.
Any help, greatly appreciated! Thanks in advance.
As suggested, I downloaded and installed org.apache.felix.bundlerepository-1.6.6.jar; however, I get exactly the same error. I asked the same question of the Virgo user group, and the answer that came back is that OBR is not supported. Maybe I'm missing something here, but there's very little information around on this topic. If you know better, please update this thread for the sake of others!
The message is quite clear - you need a repository admin service. Felix provides an implementation (download Bundle Repository).
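For what it's worth, on a stock Equinox or Felix console the sequence would look something like the sketch below (the bundle id and paths are placeholders); note that, per the update above, this reportedly still fails under Virgo, where OBR is not supported:

    osgi> install file:/path/to/org.apache.felix.bundlerepository-1.6.6.jar
    Bundle id is 47
    osgi> start 47
    osgi> repos add file:/home/fuzzy/.m2/repository/repository.xml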
I use the hot swap Java debugging feature with a web app on Tomcat. After some class signature change, I got the "Hot Code Replace Failed" Eclipse dialog - I understand that.
What I want in such a case is to republish the application (I can do that) and work with the newly deployed code. However, the debugger still complains until I restart the server. Because of the other apps and the long startup time, I don't want to do that.
Is there a way to tell the debugger that the new class version is already loaded in a new webapp classloader and that it is safe to continue?
Thanks.
Why don't you try JRebel?
JRebel is a JVM Java Agent that integrates with application servers, making classes reloadable with existing class loaders. Only changed classes are recompiled and instantly reloaded in the running application.
JRebel plugs into IDEs and build systems. Classes and static resources are loaded straight from the workspace.
http://zeroturnaround.com/software/jrebel/
Regards,
Andrea
There is an old PhD project. The guy who made it was brought in by Oracle, but his work didn't make it into Java 8; hopefully it will be seen in Java 9, though it is more likely to be in Java 10. There is a newer version of this for Java 8, I guess; I haven't tried it yet.
My Original Question for additional information: Advanced Code Hot Swapping in JDK 8?
And the Project page on Github: https://github.com/dcevm/dcevm
With this you can hot-replace almost any class change, freeing you from ever restarting the JVM (besides side effects on static objects and singletons, but that is only logical).
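For reference, a sketch of how it is typically enabled, assuming DCEVM has been installed into the JDK as an "alternative JVM" (the flag and install location come from the DCEVM docs; adjust paths for your own setup):

    # Run a standalone app on the DCEVM instead of the stock HotSpot VM
    java -XXaltjvm=dcevm -cp build/classes com.example.Main
    # For Tomcat, the same flag can go into CATALINA_OPTS, e.g.:
    export CATALINA_OPTS="-XXaltjvm=dcevm"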
Did you try the server option "Serve modules without publishing", together with your publishing settings?
Also, I'm not sure what you are saying about the other apps.
I need to use some EJBs which are deployed on JBoss version 4.x from another EJB deployed on JBoss version 3.2.x. Is this possible?
I ask because I have a third-party application which uses some strange bridges to do that, and I don't know why (though I haven't tried to do this on my own).
This is unfortunately not possible. One of the major drawbacks of remote EJBs is that there is nothing in the specification that guarantees or even suggests any kind of interoperability between different vendors or between different EJB versions from the same vendor.
In practice I found that, at least with JBoss AS, it never works. Even minor upgrades break binary compatibility completely. There have been some very hacky attempts with special class loaders that are only given access to the client libs of the target JBoss AS, but this is very tricky to get right.
I guess this "strange bridge" you are talking about is using such a trick. Kudos to whoever built that bridge for getting it to work at all.
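To make the idea concrete, here is a rough sketch of such a class loader bridge; the jar path, host, and JNDI name are all hypothetical, and this is an illustration of the trick rather than a proven recipe:

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.util.Properties;
    import javax.naming.InitialContext;

    public class JBoss4Bridge {
        // Looks up a remote EJB on a JBoss 4.x server from inside JBoss 3.2.x.
        // The 4.x client libs live in their own class loader (parent = null,
        // i.e. bootstrap only) so they never clash with the 3.2.x classes of
        // the hosting server, and are made visible through the thread context
        // class loader only for the duration of the call.
        public static Object lookupRemote(String jndiName) throws Exception {
            URL[] clientLibs = {
                new URL("file:/opt/jboss4-client/jbossall-client.jar") // hypothetical path
            };
            ClassLoader isolated = new URLClassLoader(clientLibs, null);
            Thread t = Thread.currentThread();
            ClassLoader previous = t.getContextClassLoader();
            t.setContextClassLoader(isolated);
            try {
                Properties env = new Properties();
                env.put("java.naming.factory.initial",
                        "org.jnp.interfaces.NamingContextFactory");
                env.put("java.naming.provider.url", "jnp://jboss4host:1099"); // hypothetical host
                InitialContext ctx = new InitialContext(env);
                // The returned proxy's classes come from the isolated loader,
                // so business methods must also be invoked reflectively --
                // which is exactly the part that is so tricky to get right.
                return ctx.lookup(jndiName);
            } finally {
                t.setContextClassLoader(previous);
            }
        }
    }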
See this topic I started on the JBoss community forum for some more details: http://community.jboss.org/message/587180
It seems to me, from the following page
http://www.eclipse.org/eclipse/platform-team/target.php
and other pages like it, that there has been a robust, mature, properly integrated (and seemingly very smart) way to upload and synchronize files from an Eclipse workspace with an FTP site for four to six years already.
It seems from that page and similar pages around the web that "the WebDAV and FTP plugins are built as part of the platform build", which in plain English would mean that if you have the core Eclipse files, you should already have it.
This is obviously not the case. You don't even have the plugins required for basic FTP access unless you install them from a repository.
Not only is it not installed; it is not available from the default plugin repositories set up when you download Eclipse.
Not only that, but I could not even find a link to the repository anywhere.
RSE - which seems like a library out of the nineties, with such limited functionality that shell clients written for Windows 3.1 could do more in fewer steps - has its plugin repository URL posted in many places, but the team plugin has at best only links to CVS repositories of the source, and even most of those are broken.
In conclusion:
Does anyone know how to install the "team" FTP client so that I can synchronize my content with FTP?
Well, here is an SFTP plugin on SourceForge.
It is only an internal plugin defining a basic container for any FTP- or WebDAV-based application.
You can see, when looking at:
the source of eclipse.ftp, that it is mainly an exception, some interfaces, and a basic FTP container;
the sources of the target.ftp plugin, that the feature has been there for more than four years, untouched, with basic functions only.
Only a more advanced client like eclipse.team.ftp defines an actual client, but it no longer lives in eclipse.team.ftp: it is now part of the DSDP Target Management component, which has developed a more advanced FTP/WebDAV layer and took over in 2006.
Ok, time for me to answer my own question.
No, RSE does not do what org.eclipse.team.ftp was able to accomplish five years ago. Somehow that functionality never made it in; I have no idea why they dropped a perfectly logical solution in favor of the new, backwards methodology.
However, visiting the site that #bmargulies recommended, I realized that JCraft still hosts the original FTP and WebDAV service for Team Services (upon which their SFTP service is built).
So you can just point your Eclipse install dialog at http://eclipse.jcraft.com/ and install the original plugin.
Good luck
I am also looking for this. Any solution for you yet?
This one has also been left dead and then excluded. :((
Team Synchronization on top of RSE
I guess there are still not enough Eclipse developers interested in it, especially when one continues to see not only dead projects but even successful FTP-sync projects being shut out.
Really sad.