Domino 8.5.3 - Create an organization extension library / codestore - Eclipse

This is a project I've been working on, off and on, for months, and I feel like I'm pretty close, but I just can't seem to get past the final hurdle.
The goal is to develop an organization extension library that contains both internal and 3rd party code that we frequently rely on.
History
As a test project, I started with Apache POI because it is already in wide use in our environment. I have a plug-in and feature built just from the POI .jars that allows me to build our current POI applications, as long as I add the plug-in (from my workspace) to my build path. The apps work on the servers because we have already distributed the POI .jars by manually copying them.
The next step is taking that plug-in and getting it into an updatesite so that all of the servers and developers can synchronize on one version. I found and followed these two excellent blog articles (that I wish existed when I started this project):
http://www.dalsgaard-data.eu/blog/wrap-an-existing-jar-file-into-a-plug-in/
http://www.dalsgaard-data.eu/blog/deploy-an-eclipse-update-site-to-ibm-domino-and-ibm-domino-designer/
One caveat: the articles are written for Domino 9 and we are running 8.5.3 here, but that only matters in the last (installation) step.
Current
This brings us to the problem. All of the above seems to have worked great up to a point: I can install my feature in my Designer client from the Eclipse update site, and it works great. However, installation fails once I import that site into our updatesite.nsf database. This means that while the developers can all install from the update site if I put it on a network drive, that doesn't deploy updates to our servers.
The problem is that when I try to install from the .nsf update site, the Eclipse updater just hangs. I've let it run for well over an hour, and eventually Notes becomes completely unresponsive.
So the question is, is there anything I might have done wrong, either in the development of the plug-in or server configuration that might be causing this issue?
Additional Info
I'm looking at the OSGi console, and it is largely unhelpful. I am getting the following error as I'm trying to install:

    SEVERE Could not access digest on the site: no protocol: 0/5B004DDD5E38F3FF85257CAF004C72C7/$file/digest.zip ::class.method=unknown ::thread=Worker-7 ::loggername=org.eclipse.update.core
I could generate dumps if that would be useful.
Security is also locked down fairly tight here. It could be a security issue - is there a way to troubleshoot that? Once I get to the hang I'm just stuck guessing.
This has been edited for clarity and to update information

I know this post is over five years old, but for those who find it while trying to resolve the error

    SEVERE Could not access digest on the site: no protocol:

the cause is the update site project not having the URL of the Domino updatesite.nsf added to the Archives tab of its site.xml.
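For anyone staring at the Archives tab wondering what it actually generates: it adds <archive> entries to site.xml. A minimal sketch (the server name, feature id and jar paths below are placeholders, not from the original post):

    <site>
       <!-- the Archives tab adds entries like this; they give the site's
            jar/digest references an absolute URL (i.e. a protocol), which
            is exactly what the "no protocol" error is complaining about -->
       <archive path="plugins/org.apache.poi_3.9.0.jar"
                url="http://yourserver/updatesite.nsf/plugins/org.apache.poi_3.9.0.jar"/>
       <feature url="features/com.example.poi.feature_1.0.0.jar"
                id="com.example.poi.feature" version="1.0.0"/>
    </site>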
I found the updatesite.nsf also needs to be anonymously accessible, as no credentials are prompted for or passed through to the Domino server hosting the updatesite.nsf database (at least from DDE); YMMV from Eclipse. So if anonymous connections are blocked on the Domino server, you will be out of luck.

To develop a plug-in you really want to have 3 projects:
the plug-in
the feature
the update site
Of course a feature can contain more than one plug-in (and probably should), and an update site can contain more than one feature (and probably should). Once you have an update site project, it features a handy "Build All" button that makes sure the plug-in, feature and update site get compiled in one go. And that button is what you really want.
You can point your Domino Designer (or a local Domino server) to the feature directory using a link file. Add a plain-text .link file to framework/rcp/eclipse/links containing the path to your install site; Designer then picks up the features and plug-ins from there. After a build, you need to restart Designer or the server to activate the updated feature.
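The .link file itself is a single line. A sketch, assuming the update site was exported to a local folder (the path is a placeholder):

    path=C:/build/com.example.updatesite

Use forward slashes (or escaped backslashes), and note that by the usual Eclipse link-file convention the target folder is expected to contain an eclipse/ subdirectory holding the features/ and plugins/ folders.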
For the Domino server, the approach using an updatesite.nsf and the respective notes.ini setting makes the most sense (to me). An HTTP restart is required. Lazy people script the whole thing.
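Assuming the standard setup from the articles above, that notes.ini line is (database path relative to the data directory; adjust to your database name):

    OSGI_HTTP_DYNAMIC_BUNDLES=updatesite.nsf

followed by a `restart task http` from the server console to pick it up.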

I still don't have a great answer for this, but I believe the issue is related to the environment here. I don't have the authority to change the environment, even if I were able to conclusively demonstrate it is the cause of this problem, so it is a moot point. All I can say is that at least one administrator computer had no issue installing from the update site.
For me, the solution for distributing the update site is to put it on a network drive and have everyone install it from there. The server has no problem using it from the updatesite.nsf.

Related

ColdFusion Builder 3 vs. Dreamweaver & local and remote paths

I can't wrap my head around how I'm supposed to use ColdFusion Builder 3 (akin to Eclipse).
Up until this point, I've been using Dreamweaver 5, which is getting 'long-in-the-tooth', and I wanted to give CF Builder a try.
So, in Dreamweaver, it's pretty simple: you setup connections to servers using credentials... There's a Local path, which is the local copy of your code, and the webroot of the Server which is the 'live' copy of your code. Basically, you make a change to the local copy, and PUT the change to the Server. Easy peasy lemon squeezy, right?
But, how does this translate to ColdFusion Builder 3?
Just to give you an idea of our infrastructure: we have Development and Production. Each of these boxes has multiple web instances, for example Accounting, Human Resources, IT. Each of those web instances can have multiple applications. I'm only concerned about my instance, IT, on both the Production and Development servers.
Is a workspace supposed to represent an instance on a web server?
In CFBuilder, should I configure 1 server per web app?
Is a project supposed to represent a web app?
Am I supposed to use drive mappings to the inetpub wwwroot for access to web applications? Is it even considered kosher to have a drive mapping to the web root? \\server\c$\inetpub\wwwroot
Where do I keep my local copy of my code?
How do I move items from Development to Production?
My main confusion is with workspaces, projects, and servers. My intent is to debug and 'view page in browser' from CFBuilder. However, when you set up a server, under Server Mapping and URL Prefix you're supposed to indicate the Local and Remote paths, and this is not directly related to the physical location of the project... and as I've mentioned, there are multiple instances, multiple applications, and the development box is not my local machine, it's a remote server...
I would really like to know how others have made this work for them.
I really don't mind this question even though it's not directly code related because I've been using ColdFusion Builder (CFB) for years and there just isn't enough good documentation out there. I now enjoy a great experience with CFB thanks to blog posts and sharing experiences with other devs :)
My setup: CFB3 running on Windows 8.1, dev server running on a Virtual Machine so it is treated as "remote server" just like yours. I also update remote staging and production servers (although not directly from CFB).
First, let's set some reasonable expectations: Dreamweaver and CFB are very different, in that CFB focuses on programming and Dreamweaver on design. CFB is built on Eclipse and therefore has the advantage of benefiting from most Eclipse plugins.
Your question is specifically about how to set up your projects in CFB using 2 remote servers (dev and prod). It's different for everyone but I'll share my setup with you. (sidenote: My projects are also stored in Git repositories - 1 repo for every app)
Starting from the top: a workspace in CFB deals with your whole Eclipse application, not just your projects. The most important things kept in this directory are snippets and plugins. You do NOT need to keep your project files in here; this is merely the main directory where all of your settings are kept. You are not required to have more than one workspace (I only have one). Why would you need more? You may be a multifaceted programmer who needs separate workspaces using separate tools (different plugins, snippets, window layouts...).
To answer your next question (1 server per web app): all you need to do is configure your dev servers in the "CF Servers" tab. You need to add one server per web instance, for every instance you'd like to test on. Hopefully your dev server has RDS enabled (very helpful for remote database and file viewing, just like in Dreamweaver). During configuration, don't worry about Mappings or Virtual Host Settings (I have another recommendation later). Once configured, you'll be able to assign that server to a project.
Drive mappings: I would never recommend mapping to the webroot of a shared dev server. If you were to use that drive map as your local directory, your changes would be made directly on the development server. What you want to do instead is create a new project by right-clicking in the Navigator area and selecting Import > Other > FTP. Follow the steps, choose anywhere on your local drive to store the files, then choose "New project" at the end (this adds the .project file necessary for CFB to manage the project).
Once the project is created, right click on it, select ColdFusion Project and choose the CFML Dictionary version you'll be using (CF11, 10, 9...). Then, select ColdFusion Server Settings and choose the dev server. This is necessary for testing.
What you now have is a local directory with your app and eclipse knows about the remote server. In order to synchronize, you right click on the project, go to Team and synchronize from there. For detailed information about synchronization over FTP, see the help section "Guide to WebDAV and FTP".
Moving to production is not as simple as it was in Dreamweaver. The FTP configuration information only allows for one connection (thus giving you a list of files synchronized between your project and the dev server). Therefore, you'll need a third-party FTP client to synchronize your local project with your prod server.
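Any third-party client that can mirror a directory tree will do. For example, a sketch with lftp (host, user, and paths are all placeholders):

    # push only changed files from the local project to production
    lftp -u deploy_user ftp.prod.example.com \
         -e "mirror --reverse --only-newer ./MyApp /wwwroot/MyApp; quit"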
As promised, my last entry is about debugging, which is why I said to skip the mappings and virtual host settings in the CF Server config. I really, really recommend a third-party paid plugin called FusionDebug (http://www.fusion-debug.com/). This plugin simplifies the setup and allows you to step into all of your code (which doesn't work so well in native CFB). There's a 30-day trial, and I recommend you try before you buy (or license for a year in this case!).

How to upgrade Wordpress and plugins when deploying using Capistrano?

I'm hoping someone can confirm whether or not the following scenario is an issue with deploying updates to WordPress sites and, if so, do you have a solution on how to best manage this?
The basics:
I have a local development WordPress Multisite project for which I use Git and Capistrano to deploy to remote staging and production servers.
Everything EXCEPT the uploads and blogs.dir directories (in wp-content) is under version control. Yes, the WordPress core, themes, plugins, etc. are updated locally, committed, pushed and deployed. This means that I have to log in and activate plugins initially; they are simply installed via the Capistrano deploy.
The databases on development, staging and production are different, and I'm not concerned about trying to sync these up.
My Concern:
Many updates to plugins and the WordPress core also perform updates to the database when doing an auto-update via the admin. I update the WordPress core and plugins locally on my development install. The code for these updates ends up being committed, pushed and deployed. However, the deployment simply adds/deletes/replaces changed files on the staging and production servers. Production and staging are thus missing any of the updates to the database, since those are usually part of the auto-update process - e.g., deactivate, update, activate (run any database updates).
My Questions:
1. Is my concern accurate - that the production and staging servers will have the latest code but be missing any database updates required by that code?
2. If so, does anyone have thoughts on how I can modify the Capistrano deploy code to deactivate/reactivate plugins? What about changes in WordPress itself, e.g., 3.2 to 3.3?
3. If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance-mode tool/plugin that will somewhat automate the deactivation/activation of the plugins, so that any updates upon activation are triggered?
Many Thanks,
Matt
It's important to note that you don't need to activate and deactivate plugins when you upgrade the WordPress core from version to version. Here is an explanation from Ryan Boren on why. Depending on the plugin, though, some may have an upgrade process built into their own upgrade - that is, the upgrade of the plugin, not of WordPress. Nonetheless, I'll go through your three questions and answer them as directly as I can.
1. Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
Yes. When updating, if there is a change to the database schema, WordPress will not function properly unless the new schema exists. When you attempt to access the admin side of WordPress and the db version is lower than what your WordPress version expects, it will redirect you to a database upgrade page.
WordPress sets a global called $wp_db_version in the /wp-includes/version.php file and maintains migration scripts to upgrade the database incrementally, from each previous version to the next, until the version number is up to date (seen here). Here is a simpler list in a FAQ showing how the revision numbers correlate to WordPress versions.
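In other words, the gatekeeping is a simple option-versus-global comparison. Conceptually, it looks like this (a simplified sketch of WordPress's own check, not code you need to write; the real logic lives around wp-admin/admin.php):

    <?php
    global $wp_db_version;  // hard-coded per release in wp-includes/version.php

    // if the database reports an older schema than the code expects,
    // send the user to the database upgrade page
    if ( get_option( 'db_version' ) != $wp_db_version ) {
        wp_redirect( admin_url( 'upgrade.php' ) );
        exit;
    }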
2. If so, does anyone have thoughts on how I can modify Capistrano deploy code to deactivate/reactivate plugins?
As I said above, you don't typically need to activate/deactivate plugins after core upgrades, unless I suppose a plugin specifically requires that you do so. If schema changes in WordPress break a plugin, the plugin developers will need to release a new version. When upgrading that plugin, it will be shut off and restarted, and it's those developers' responsibility to make sure everything that needs to take place does so.
However, you may need to deactivate/activate separately in deployed environments such as yours, since the actual upgrade process takes place on a different machine, and thus probably against a different database, from the one it will ultimately be used on.
Perhaps the best thing to do would be to have your deployment script hit a URI of a plugin within WordPress, a plugin you would write which would deactivate/activate plugins, or an existing one that already does it.
It's possible some existing plugins handle parts of what you're looking for, but I take the key component of your question to be automation, and an avoidance of having to log into each environment and upgrade plugins one by one, so developing one yourself that does exactly what you need might be the way to go. Developing such a plugin is possible if you make use of the tools WordPress already provides:
activate_plugin()
activate_plugins()
deactivate_plugins()
validate_plugin()
Plugin_Upgrader class (maybe)
Look through the whole /wp-admin/includes/plugin.php file to see what you might find useful. Additionally, check out the code that actually handles plugins on the admin side, in /wp-admin/plugins.php - just to see how it's done. You may want to stop the deactivation hooks from wiping out the configuration of plugins that clean up after themselves, so consider passing $silent as true when deactivating.
To make this really slick, you'll probably want to grab get_option('active_plugins') to see which plugins are currently activated, and only run your script on those (making sure the plugin excludes itself from the process). A rough sketch of such a plugin follows.
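Here is a minimal sketch of such a plugin. Treat every name in it as an assumption for illustration: the plugin name, the deploy_cycle query variable and the shared-secret check are all invented. It cycles every currently active plugin except itself:

    <?php
    /*
    Plugin Name: Deploy Cycle (hypothetical example)
    */
    add_action( 'init', 'deploy_cycle_plugins' );

    function deploy_cycle_plugins() {
        // invented query var + shared secret so only the deploy script triggers this
        if ( ! isset( $_GET['deploy_cycle'], $_GET['key'] ) || $_GET['key'] !== 'CHANGE_ME' ) {
            return;
        }
        require_once ABSPATH . 'wp-admin/includes/plugin.php';

        $active = get_option( 'active_plugins' );
        // exclude this plugin itself so it survives the cycle
        $others = array_diff( $active, array( plugin_basename( __FILE__ ) ) );

        // $silent = true: skip deactivation hooks so plugins that clean up
        // after themselves don't wipe their own configuration
        deactivate_plugins( $others, true );

        foreach ( $others as $plugin ) {
            // non-silent activation, so each plugin's activation/upgrade hooks run
            activate_plugin( $plugin );
        }
        exit( 'plugins cycled' );
    }

Your Capistrano recipe would then only need to hit that URI after the code sync, e.g. run "curl -s 'http://staging.example.com/?deploy_cycle=1&key=CHANGE_ME'" (again, hypothetical names).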
3. What about changes in WordPress, eg, 3.2 to 3.3?
Changes from 3.2 to 3.3 should be thought of as no different from any other set of changes, so everything said here applies.
4. If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
I don't think Capistrano will be doing any of the heavy lifting here - but it's certainly not in the way, either. You should just need to hit a URI within the plugin, and that should get things rolling within the application. The important thing is that all those WordPress functions need to be available, so you can't just run it as an independent script.

Eclipse FTP "Team Plugin"

It seems to me, from the following page
http://www.eclipse.org/eclipse/platform-team/target.php
and other pages like it, that there has been a robust, mature, properly integrated (and seemingly very smart) way to upload and synchronize files from an Eclipse workspace with an FTP site for four to six years already.
It seems from that page, and similar pages around the web, that "The WebDAV and FTP plugins are built as part of the platform build", which to me, in plain English, would mean that if you have the core Eclipse files, you should already have them.
This is obviously not the case. You don't even have the plugins required for basic FTP access unless you install them from a repository.
Not only are they not installed, they are not available from the default plugin repositories set up when you download Eclipse.
Not only that, but I could not even find a link to the repository anywhere.
RSE - which seems like a library out of the nineties, with functionality so limited that shell clients written for Windows 3.1 could do more in fewer steps - has its plugin repository URL posted in many places. But the Team plugin has, at best, links to CVS repositories of the source, and even most of those are broken.
In conclusion: does anyone know how to install the "Team" FTP client so that I can synchronize my content with FTP?
Well, here is an SFTP plugin on SourceForge.
The Eclipse component itself is only an internal plugin defining a basic container for any FTP- or WebDAV-based application.
You can see, when looking at:
the source of eclipse.ftp, that it is mainly an exception class, some interfaces and a basic FTP container;
the sources of the target.ftp plugin, that the feature has been there for more than four years, untouched, with only basic functions.
Only a more advanced plugin like eclipse.team.ftp defined an actual client, but development no longer happens there: the DSDP Target Management component took over in 2006 and has built a more advanced FTP/WebDAV layer.
Ok, time for me to answer my own question.
No, RSE does not do what org.eclipse.team.ftp was able to accomplish five years ago. Somehow that functionality never made it in. I have no idea why they dropped a perfectly logical solution in favor of the new backwards methodology.
However, visiting the site that #bmargulies recommended, I realized that JCraft still hosts the original FTP and WebDAV Service for Team Services (upon which their SFTP service is built).
So you can just point your Eclipse install dialog at http://eclipse.jcraft.com/ and install the original plugin.
Good luck
I am also looking for this. Any solution for you yet?
This one has also been left dead and then excluded. :((
Team Synchronization on top of RSE
I guess there are still not enough Eclipse developers interested in it, especially when one continues to see not only dead projects but even successful ftp-sync projects being shut out.
Really sad.

How do you manage your project life cycle? [closed]

How do you manage your project life cycle?
For example: Do you start with a template? Do you use versioning such as SVN as the authoritative source? Do you archive the projects, if so when and how? When a project is revived (work resumes), how’s that handled? Do you use automated scripts to do things such as create IIS sites, DBs, archive, launch, etc?
Of particular interest is management of many projects at varying points of development.
Development: We do not start with a template, because the world changes quickly enough to make template maintenance a full-time job. We do encourage everybody to use the same IDE (Eclipse), so that they can help each other with their environments.
Project Management: We are using GForge to manage our projects. Sourceforge is slightly better, but GForge is much cheaper and has a different licensing fee model. GForge incorporates CVS, SVN, Document storage, Issue trackers and integrates everything nicely. This makes it easy to see where the project is at. Open issues, and closed issues with connected code changes, everything is integrated.
Versioning: Although we tried SVN, we switched back to CVS because it fits our needs better and works fine.
Backups: Our GForge server, housing all our projects and source code, runs on a VMware ESX server. Backups are done daily at the VM level, and we make VM snapshots if we feel we need more frequent restore points for some reason.
Reviving projects: This is very common in our business. Every project has all its libraries and build requirements in CVS. Each project also has an up-to-date development manual which describes all the steps to get a development environment running, with a chapter on all the non-default things to pay attention to. We try to build software in an as-default-as-possible environment so that developers don't have to spend days tweaking their settings.
Nearly all projects are built using Maven, which also makes life easy for our developers. Usually reviving a project takes only a few steps (a shell sketch follows the list):
Download eclipse
Connect to CVS over SSH (extssh is built into Eclipse)
Check out the project (default "Check Out" option)
Run "Maven Eclipse" and refresh Eclipse
Run unittests in Eclipse to see if everything is working.
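For reference, the same sequence from a shell looks roughly like this (repository host, module name and paths are placeholders):

    # check out the project; Eclipse's extssh is plain ssh under the hood
    CVS_RSH=ssh cvs -d :ext:dev@cvs.example.com:/cvsroot checkout myproject
    cd myproject

    # the "Maven Eclipse" step: generate .project/.classpath for the IDE
    mvn eclipse:eclipse

    # run the unit tests outside the IDE as a sanity check
    mvn test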
Builds: All our projects are built on a separate build server. Every morning the build server does a complete build and tags CVS if all unit tests succeed. During the day, hourly builds are made, and when there are failures the team automatically gets an email. Usually we use one build server per project, a simple Luntbuild setup (Linux, Tomcat, Luntbuild).
Both the build server and sometimes even the developer machines are VMs. This makes reviving a project really easy: get the VM from the file server, start it up, and you're good to go.
The build server creates daily sites which show unit test coverage statistics, complexity measurements, CVS activity and developer activity (who changed what and when).
All our software comes with self-building database scripts built in. Point the config file to the database, start the software, and it figures out what it needs to do to the database itself. This really comes in handy because the buildserver can just start the software. No special steps needed. Our customers are also happy, they never need to worry about their database, or upgrade scripts.
The whole project lifecycle is managed, documented and tracked in GForge, with the addition of some external spreadsheets for budget tracking because that's simply easier.
Whether you have an integrated project server or not, I think it is really important to have a system. This enables you to switch developers between projects without them getting lost, and it saves time - particularly when a customer comes back to you after two or three years for modifications to old software (yes, that happens).
All the stuff we use is open source (you can even use an open source fork of GForge). It's not in the tools, it's how you use them.
It would depend on the nature of the work. When working at home for private clients, I start by opening a folder for the client with a bunch of standard documents, which I customize, such as contracts, invoices, reports, requirements, testing, code repository, etc. As the project develops, I add/modify the directory as required.
If I had to go back to a project, I would reopen that directory, and for any non-common components, create a new directory. For example, if my client had a web application built, and now they need a second application, I would use the same directories for invoices and contracts and create new directories for the code base, requirements and testing.
In terms of backup, I archive the work at any point where I've reached a milestone, with the exception of code, which I back up daily at a minimum. At the end of each project, when I close a contract, I take the entire directory and compress it and store it on a remote server.
I create folders containing the project stages, like "initialize software process", where we place docs like the business proposal; we use another one for requirements, another for construction, releases, one for meeting minutes, and so on.
We keep those under a Subversion repository, but it really depends on what methodology you are using; it also depends on how you handle configuration management and how organized you want to be. And yes, we use templates for most of our artifacts, so we assure in some way the quality of our products.
As for source code, we have it all in a Subversion repository. After each release, we make a branch - new features only get added to the current branch (on which the next release will be based), critical bug fixes are done in the current and the old branch (so we can deliver hotfixes for the version the customers currently have).
As for all documents belonging to a release - from the planning and resource sheets to specifications, test cases, user and technical manuals for the software we create, etc. - we store them in a SharePoint portal site. The advantages of this SharePoint site are that users have access via a website (so no need to grant management access to your repository ;-), you can finely control the access rights, and you can turn on versioning. We also use tagging to mark whether a document belongs to a specific release (e.g. service pack xy) or product, or whether it is generally valid.
Concerning scripts, we use several, e.g. to perform the nightly build plus unit tests (we usually do that for the last and the current release), but also to deploy the complete software solution (including IIS site creation, database data-model upgrades, ...) on our test servers. These are NAnt scripts using lots of variables for paths, version numbers, etc., so it is very easy to copy and modify them for a new release.
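A skeleton of such a script, just to make the variables idea concrete (every name, path and version below is made up):

    <?xml version="1.0"?>
    <project name="nightly" default="deploy">
      <!-- everything release-specific lives in properties, so a new release
           means copying the script and editing these two lines -->
      <property name="version"     value="2.3.1"/>
      <property name="deploy.root" value="\\testserver\wwwroot\app-${version}"/>

      <target name="build">
        <exec program="msbuild" commandline="App.sln /p:Configuration=Release"/>
        <exec program="nunit-console" commandline="App.Tests\bin\Release\App.Tests.dll"/>
      </target>

      <target name="deploy" depends="build">
        <copy todir="${deploy.root}">
          <fileset basedir="App\bin\Release"/>
        </copy>
      </target>
    </project>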

How do you apply patches on a web project at production server?

We recently had a project where we released a beta of a big web app on our client's server. Our client asked us to do bug fixes as they come, and we try to handle them the same way. Building the app on our prototype server is much easier, as I just issue a simple 'svn up' command, which takes a second.
But in the production environment, we do not have any version control tool available. Is it possible to automate the patching work, so that we don't need to log in to FTP and upload each and every file one by one?
It's very difficult to work this way. Since I'm having this problem, surely some of you have already solved it. Please share your solutions.
Looking forward to your replies... Thanks a lot for reading, guys.
Depending on the tools available on the server, you could do a svn diff -r x:y, where x is the revision you last updated to and y is the revision you want to update to (probably the latest revision in your repository), to generate a patch, and then apply it with the patch command.
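Concretely, that could look like this (revision numbers, URL and paths are examples):

    # on a machine with svn access: diff what's deployed against what you want
    svn diff -r 1200:1234 http://svn.example.com/repo/trunk > update.patch

    # on the production server, from the webroot
    patch -p0 --dry-run < update.patch   # rehearse first
    patch -p0 < update.patch

One caveat: svn diff won't represent binary files, so images and the like still need to be copied by hand.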
If rsync is available on the production platform, and you can use it (through ssh, for instance), you could set up a production-ready tree, rsync it to the production server, and when an update comes in, svn update your production tree and rsync it again.
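A sketch of that setup (host and paths are placeholders):

    # update the production-ready tree on the deploy machine
    svn update /srv/deploy/myapp

    # mirror it to production over ssh, skipping the svn metadata
    rsync -avz --delete --exclude='.svn' /srv/deploy/myapp/ \
          deploy@prod.example.com:/var/www/myapp/

The trailing slash on the source matters: it syncs the directory's contents rather than the directory itself.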
What is stopping you from installing a Subversion client on the production server?
[EDIT] So someone doesn't allow you to install the software you need on the server. The question is: what is more important, a stable production server or an arbitrary policy? If that someone doesn't listen to arguments, go to your computer, start MS Word and write this letter:
"I hereby refuse to accept any responsibility for the stability of our production system based on the fact that [insert name here] refuses to equip me with the tools to make sure that the production system contains all the necessary files and data after an installation."
Sign this, have your boss sign it and then send a copy to [insert name here]. All of a sudden, any problem that might arise after an installation will be on his turf. Or to put it more clearly: He will be responsible for any mistake you might make.
Now, all you have to do is wait. :)
It depends on the programming environment you use. In Smalltalk, with a web application server like Aida/Web, we can upgrade live web applications on the fly, without stopping them.
The server is connected to the SCM of choice, like Monticello for Squeak Smalltalk or Store for VisualWorks. New versions are then loaded manually or automatically into the server's Smalltalk image.