What are good deployment and release strategies for a shared runtime environment?

Status Quo
For our customer we are developing some libraries and applications that run as "modules" within a larger application that is delivered as an internal Java Web Start application. The customer maintains the infrastructure that this application runs on. The "server side" is made up of a few web services based on Axis2. Both parts are deployed to a single Tomcat instance as two separate web applications.
When we release new versions of our artifacts, we just produce the necessary JAR files (e.g. ourapp-client.jar and ourapp-server.jar) and send them to our customer, who in turn drops them into the appropriate places and, if need be, restarts the Tomcat server.
Goals
We are currently Maven-izing all of our projects, and in the future we also want to do our releases the "Maven way". The main goal is to automate the release and deployment process on our side and to make it less error-prone, more reliable, and more convenient on the client side.
Main problem
The tricky part is that our customer uses the same Tomcat web applications (Axis2 for the "server side" and the Web Start app) to include their self-developed modules in the application. So we cannot use the obvious solution of delivering a fresh web app (WAR) that simply gets deployed onto the server. That is why we are currently delivering single JAR files that are put into the right location "by hand".
What strategies do you generally use for delivering your products to your customers? Has anyone had experience with a similar situation (i.e. a shared runtime environment for third-party and self-developed applications)?

The main goal is to automate the release and deployment process on our side and to make it less error-prone, more reliable, and more convenient on the client side.
I'm not sure what you mean exactly by "release and deployment process" (in Maven lingo, this is about automating SCM tasks and deploying artifacts to a remote repository).
If this is about deployment to production machines, I personally don't think this is really a job for Maven (and we don't use Maven for this). Maybe have a look at dedicated solutions like ControlTier, SmartFrog, Puppet, Chef, etc.
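For the Maven sense of the term, the release itself is largely two commands; a minimal sketch, assuming the POM's <scm> and <distributionManagement> sections are already configured:

    mvn --batch-mode release:prepare   # bumps versions and tags the release in SCM
    mvn release:perform                # checks out the tag, builds, and deploys the artifacts (e.g. ourapp-client.jar) to the remote repository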
What strategies do you generally use for delivering your products to your customers? Has anyone had experience with a similar situation (i.e. a shared runtime environment for third-party and self-developed applications)?
We deliver the things we can control. If a customer wants to include their own bits in a given application, we would make the required subparts available, but the packaging would probably be their responsibility.

DevOps with XPages on Premises or PaaS like Bluemix

What is the best way to achieve DevOps with XPages?
Multiple developers working as a team
On-premises servers [Dev, QA, Prod]; can we replicate to Bluemix?
Source control
Automated testing: UI/application, unit testing business logic with a testing framework
Automated deployment
IDE/Tools: Domino Designer; are there other ways?
Note: use of views when the data is in an NSF; otherwise the data is in the cloud or SQL. No forms or other classic Notes design elements.
What are your approaches to this?
This is a high-level overview of the topics required to attempt what you're describing. I'm breezing past lots of details, so please search them out; I've tried to reference the supporting documentation, blog posts, etc. of others that I'm currently aware of. If anyone has anything good to add, I'm happy to add it in.
There are several components involved with what you're describing, generally amounting to:
SCM workflow
building the app (NSF)
deploying the built app to a Domino server
Everything else, such as release workflow through a QA/QC environment, is secondary to the primary steps above. I'll outline what I'm currently doing, attempting to highlight where I'm working on improving the process.
1. SCM Workflow
This can be incredibly opinionated and will depend a lot on how your team does/wants to use source control with your deployment / release process. Below I'll touch on performing tests, conceptually, during/around the build step.
I've switched from a fairly generic SCM server implementation to a GitLab instance. Even running a CE instance is pretty fantastic with their CI runner capabilities. Previously, I had a Jenkins CI instance performing about the same tasks, but I had to bake more "workflow" into the Jenkins task, whereas now most of that logic is in a unified script, referenced from a config file (.gitlab-ci.yml). This is similar to how a Travis CI or other similar CI config file works.
This config calls some additional helper work, but ultimately revolves around an adapted version of Egor Margineanu's PowerShell script which invokes the headless DDE build task.
2. Building an NSF from Source
I've blogged about my general build process with my previous Jenkins CI implementation. I followed the blog posts of Cameron Gregor and Martin Pradny for this. Ultimately, you need to:
configure a Windows environment with Domino Designer
set up Domino Designer to import from ODP (disable export), ensuring Build Automatically is enabled
the notes.ini will need a flag of DESIGNER_AUTO_ENABLED=true
the Jenkins CI or GitLab CI runner (or other) will need to run as the logged-in user, not as a Windows service; this allows it to invoke the "headless DDE" command correctly, since it runs in the background as opposed to a true headless invocation
ensure that Domino Designer can start without prompting for a user's password
My blog post covers additional topics, such as flagging the build as a success or failure by scanning the output logs for failure markers (sketched below). It also touches on how to submit the code to a SonarQube instance.
Ref: IBM Notes/Domino App Dev Wiki page on headless designer
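As a rough illustration (not the actual referenced script), the CI wrapper amounts to something like the following PowerShell; the exact headless arguments come from the wiki page above, and the paths and failure marker here are assumptions:

    # Launch Designer for the headless build and wait for it to finish.
    $designer  = 'C:\Program Files (x86)\IBM\Notes\designer.exe'   # assumed install path
    $buildArgs = '<headless build arguments, per the wiki page>'   # placeholder, see Ref above
    Start-Process -FilePath $designer -ArgumentList $buildArgs -Wait
    # Flag success/failure for the CI job by scanning the output log (assumed location/marker).
    $log = Get-Content 'C:\build\logs\headless.log' -Raw
    if ($log -match 'failed') { exit 1 } else { exit 0 }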
Testing
Any additional testing or other workflow considerations (e.g. QA/QC approval) should go around the build phase, depending on how you set up your SCM workflow. A lot of the implementation will revolve around the specifics of your setup. A general idea is to allow or prevent deployment based on the outcome of the build + test phase.
Bluemix Concerns
IBM Bluemix, the only PaaS that runs IBM XPages applications, will require some additional consideration, such as:
their Git deploy process will only accept a pre-built NSF
the NSF must be signed by the account owner's Bluemix ID
Ref:
- IBM XPages on Bluemix
- Bluemix Docs: Building XPages apps for the Bluemix Runtime
3. Deploy
To Bluemix
If you're looking to deploy an XPages app to run on Bluemix, you'll want to ensure your headless build runs with the Bluemix ID, or that the NSF is at least signed with it, and then deploy it for a production push either via a Git connection or the cf/bluemix command-line utility. Bluemix's receive hooks handle all the rest of the deployment concerns, such as starting/stopping the server instance, etc.
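As a sketch of the command-line variant (the API endpoint, app name, and NSF path are assumptions):

    cf login -a https://api.ng.bluemix.net     # authenticate as the Bluemix ID that signed the NSF
    cf push MyXPagesApp -p .\build\myapp.nsf   # upload the pre-built, signed NSF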
To On-Premise Server
A user ID with appropriate credentials needs to either perform a design replace/refresh, or stop a dev/test/staging server, copy the .nsf file into place, and start the server back up. I've heard rumors of Cameron Gregor making use of a plugin to Domino Designer to perform the operations needed for OSGi plugin development, which sounds pretty useful. As most of my Domino application development is almost purely NSF-based, I'm focusing more on an approach of deploying to a staging/dev/test server, on which I can then run a design task to do the needed refresh/replace; this is closer to the "normal" Domino way of doing things.
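A minimal sketch of the file-copy variant, assuming a Windows-hosted target server (the service name and paths here are hypothetical); the design refresh/replace would then run as a separate design task:

    Stop-Service -Name 'Lotus Domino Server'    # assumed Windows service name
    Copy-Item '\\buildhost\drop\myapp.nsf' 'D:\Domino\Data\apps\myapp.nsf' -Force
    Start-Service -Name 'Lotus Domino Server'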
Summary
Again, there are a lot of moving pieces involved here, some of which get rather opinionated rather quickly. For example, I'm currently virtualizing my build machine so I can spin up a couple of copies of it, allowing for more than one build at a time. If there are major gaps in the process, let me know and I'll fill in what I can.

TFS Intranet Automated Deploy Strategy

I have introduced branching/merging to my team and have talked before about how it would be great to automatically build and deploy code checked into the staging/master branches, but I'm a junior dev, not very ops-y.
The trouble I'm having is that we create intranet applications and store them on our own VMs, which we have access to, but we also have load balancing, which is causing me grief!
I can get a build to automate (well, I haven't got all the bugs figured out, but I'm working my way through them), and I can even get the build to automatically create a zip file ready for deployment.
Is it possible to configure several servers for deployment?
i.e.
1) I check in some code to stage
***Automatically***
2) Code builds
3) Build completes, unit tests run and pass
4) Code is packaged into a .zip
5) The .zip is deployed across the three load-balancing servers (all with the same file path)
***
It may be worth noting that our TFS server currently runs Visual Studio, so the code is built on the same server where it is stored, but this is not the server we run live code from.
Any help or tutorials specific to my setup would be GREATLY appreciated; I really want to turn this department's release strategy around!
I am going to address only the deployment aspect. There are a lot of different ways that this can be handled, such as:
Customizing the build template
Writing custom .NET code and inserting it into the build template (which would also involve customizing the template)
Creating a Batch or PowerShell script set to run after the build completes
Using a separate tool such as Octopus Deploy or Release Manager to handle the deployments
The first thing you need to do is separate the build and deployment steps in your head. While they are tightly coupled in your model, they are two totally different tasks that need to be handled in different ways.
The second thing is to stop thinking like a developer when it comes to the deployment portion. While there will likely be a programmatic solution, you'll need to identify the manual steps first.
You stated that you're not very ops-y, by which I assume you mean you're more of a developer than a systems analyst. If that is the case, then the third thing you'll need to do is get someone involved who is, such as your current release team.
There are then three major things that need to be done:
EVERYTHING needs to be standardized. If you can't standardize something, then standardize the way that it's non-standard. (Example: you have a bulk list of servers you need to deploy to, and you need to figure out which ones to deploy to based on their names, which can be anything. In that case, a rule needs to be put in place that all QA servers have QA in their name, User Acceptance servers UAT, Production servers PROD, etc.; see the snippet after this list.)
Figure out how you're going to communicate from the build to the deployment: which builds are going to be deployed, to which servers, and where the code is going to be picked up from.
You need to document every manual step, and every exception to those steps, and every exception to those exceptions.
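To illustrate the first point, once the naming rule exists, target selection becomes trivial; a toy PowerShell example (the file name and rule are assumptions):

    # Pick this run's deployment targets purely from the standardized names.
    $allServers = Get-Content 'C:\deploy\all-servers.txt'
    $qaTargets  = $allServers | Where-Object { $_ -match 'QA' }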
Once you have all those pieces in place, you then need to go through each manual step and automate it, whether that's through Batch, PowerShell, or a custom-built application. Once you have all the steps automated, you'll have both the build and deploy pieces complete.
After you're able to execute a single "manual" automated deployment to a single environment, you're then ready to figure out how you want to run it for multiple environments. This can range from an XML file that is iterated through to simply calling the same command multiple times with different parameters.
A quick summary of how I've done this at my current job (where using a third-party deployment tool was not an option):
Created a tool using .NET WinForms to allow us to "manually" run automated builds (we use the interface to determine the input parameters, and the custom classes under the hood do all the heavy lifting; these custom classes live in a separate project that builds into its own DLL, which also allows us to test tweaks and changes to the process in a testing environment before we roll them out to our production build server)
Set up an XML file for each environment (QA, UAT, Prod, etc.) that contains all of the servers that need to be deployed to in that environment, including destination paths, scheduled tasks, and Windows Services
Customized the TFS build template to include the custom classes created for the tool, which read the XML file and iterate through each server entry to perform the deployments (see the sketch below)
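As a hedged sketch of that last step (the XML layout, element names, and paths here are invented for illustration):

    # Read one environment's config and fan the build output out to each server.
    [xml]$config = Get-Content 'C:\deploy\QA.xml'
    foreach ($server in $config.environment.server) {
        # Copy the packaged build to this server's destination path.
        Copy-Item 'C:\drop\release.zip' "\\$($server.name)\$($server.destinationPath)" -Force
        # Restart any Windows Services listed for this server.
        foreach ($svcName in $server.service) {
            Invoke-Command -ComputerName $server.name -ScriptBlock {
                param($name) Restart-Service -Name $name
            } -ArgumentList $svcName
        }
    }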
I'm more than happy to help with more specific examples and assistance; I look at things a bit differently than most people, and that helps when it comes to release management.

Glassfish 3 EJB app deployment advice?

For a variety of unfortunate management reasons (budget constraints, etc.) I, the developer, have been put in the position of deploying the app in a production environment. The catch is that I don't have any experience with production EJB application server deployment. That said, they are aware that there are no guarantees of success.
The context:
The dev server runs the latest version of NetBeans with GlassFish v3, on a Mac
98-99% uptime is OK; there are no financial or critical transactions
It is a client/server EJB 3 app; the web tier, business tier, and resource tier currently run on the same machine.
I have the liberty to choose the hw/sw infrastructure
Load estimate: 10 simultaneous connections on average, with rare peaks of 200
The outbound public data is text and small pictures (it's for iPhone clients); inbound is HTTP text only
Basic maintenance will be taken care of (backup, server reboot, etc)
My questions for production deployment:
What are the must-haves infrastructure-wise? Minimum system specs, etc.?
Is it ok to keep Glassfish v3?
Which configuration aspects of the server should I focus on?
Worst-case scenario: if I deploy the same software infrastructure (NetBeans/GlassFish v3) as during development, would the server keep up?
Any piece of advice would be most welcome. Thanks!
For the architecture, you can start small with just a single GlassFish instance and no front web server (GlassFish has one built in that is very capable). If you can wait for the release of GlassFish 3.1, you'll be able to add instances (clustered or standalone) for scalability and centralized administration.
Most production instances of GlassFish I've seen run with 1-2 GB of JVM heap (-Xmx), but your mileage may vary if you load lots of data into memory or use certain frameworks. If you want better reliability, having the instances on separate machines is obviously a plus. With two instances on the same machine you can offer continuity of service if one instance fails (but not if the machine fails).
I'd suggest scripting the provisioning of resources (connection pools, JDBC data sources, etc.) and applications as much as possible using the "asadmin" command-line tool, and trying not to use NetBeans on the production platform.
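For example, a provisioning script might look roughly like this (the resource names, driver class, and properties are placeholders, not taken from the question):

    # Create the connection pool and JDBC resource, then deploy the application.
    asadmin create-jdbc-connection-pool --datasourceclassname org.postgresql.ds.PGSimpleDataSource --restype javax.sql.DataSource --property user=app:password=secret:databaseName=appdb AppPool
    asadmin create-jdbc-resource --connectionpoolid AppPool jdbc/AppDS
    asadmin deploy --name ourapp ourapp.war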
Benchmarking with simulated load sounds like a wise thing to put together before going live, and this survival guide will probably come in handy.
You don't mention the database. Isn't there one?
I suggest the following:
I'm not a Mac expert, but I'd say go with 6 GB or more of RAM.
HDD space is not a problem these days.
I don't know much about Mac processors (whatever the equivalent of dual core is, etc.).
Personally, I have not used GlassFish 3 in production, but I hope it's stable now, so you should be OK.
System Architecture:
Receive all HTTP requests on a web server (Apache or Sun Web Server) and load-balance across your GlassFish server(s).
Then, depending on your physical (or virtual) machines, create an instance of the GlassFish application server on each machine. If you have just one machine, create at least two instances of GlassFish. This lets you take one node down for maintenance while the other keeps going.
As far as deployment is concerned, make sure you turn off debug logs and fine-tune JPA logging, etc.
Use Ant or other scripts to deploy code and to back up the existing code.
I hope this helps you get started; the rest you can ask about or solve as you go along.
Good luck.

Solutions for automated deployment in developer environments?

I am setting up an automated deployment environment for a number of decoupled services that are in active development. While I am comfortable with the automated deployment/configuration management aspect, I am looking for strategies on how best to structure the deployment environment to make things a bit easier for developers. Some things to take into consideration:
Developers are generally building web applications, web services, and daemons -- all of which talk to one another over HTTP, sockets, etc.
Developers may not have everything running on their local machine, but they still need to be able to do quick end-to-end testing by pointing their machine at the environment
My biggest concern with continuous deployment is that we have a large team, and I do not want to be constantly restarting services while developers are working locally against those remote servers. On the flip side, delaying deployments to this development environment makes integration testing much more difficult.
Can you recommend a strategy that you have used in this situation in the past that has worked well?
Continuous integration doesn't have to mean continuous deployment. You can compile/unit test/etc. the code "continuously" throughout the day without deploying it, and only deploy at night. This is often a good idea anyway (deploying at night or on demand), since people may be integration testing during the day and wouldn't want the codebase to change out from under them.
Consider how much of the software developers can test locally. If a lot, they shouldn't need the environment constantly. If not a lot, it would be good to set up mocks/stubs so that much more can be tested on a local server (see the sketch below). Then the deployed environment is only needed for true integration testing and doesn't need to be updated constantly throughout the day.
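For instance, a throwaway local stub for one of the remote HTTP services can be just a few lines; a toy sketch (the port and canned payload are made up):

    # Serve a canned JSON response so dependent code can be tested locally.
    $listener = New-Object System.Net.HttpListener
    $listener.Prefixes.Add('http://localhost:8081/')
    $listener.Start()
    while ($true) {
        $context = $listener.GetContext()                                 # block until a request arrives
        $bytes = [System.Text.Encoding]::UTF8.GetBytes('{"status":"ok"}')
        $context.Response.OutputStream.Write($bytes, 0, $bytes.Length)
        $context.Response.Close()
    }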
I'd suggest setting up a CI server (Hudson?) and using it to control all deployments to both your QA and production servers. This forces you to automate all aspects of deployment and ensures that there are no ad-hoc restarts of the system by developers.
I'd further suggest that you consider publishing your build output to a repository manager like Nexus, Artifactory, or Archiva. That way, deployment scripts can retrieve any version of a previous build. The use of a repository manager would also enable your QA team to certify a release prior to its deployment to production.
Finally, consider one of the emerging deployment automation tools. Tools like Chef, Puppet, and ControlTier can be used to further version-control the configuration of your infrastructure.
I agree with Mark's suggestion of using Hudson for build automation. We have seen successful continuous deployment projects that use Nolio ASAP (http://www.noliosoft.com) to automatically deploy the application once the build is ready. As stated, Chef, Puppet, and the like are good for middleware installation and configuration, but when you need to continuously release new application versions, a platform such as Nolio ASAP, which is application-centric, is better suited.
You should have your best IT operations folks create and approve the application release processes, and then provide an interface for the developers to run these processes on approved environments.

Deployment strategy for distributing application upgrades across large organization

We will be embarking on an application development project (.NET 3.5) for a large organization. As we started thinking about the upgrades we would be rolling out across machines, we began looking at options like ClickOnce.
What we need is a push model: as long as the client machine is connected to the network, the server can send updates. I believe ClickOnce is a pull model (although by specifying a minimum version we can kind of push). Also, ClickOnce downloads complete files only; it cannot download just the changes (byte differences) within files.
Can anyone point me to a better tool that can be used here? Better strategies, if any, are also welcome; we are at a very early stage of the project.
I don't have a definitive answer on better options, but I've used ClickOnce and can offer some advice.
There are several update options with ClickOnce (before starting, after starting, check every time, check every X hours/days/weeks, etc.). You can also throw those out and write code to check for updates. It's not a "push" from the server, but your client could poll for updates, which would be the next best thing. Just remember, the application is going to have to restart after the update to pick up the changes.
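For instance, polling can be as simple as comparing a published version number against the installed one; a hedged sketch outside ClickOnce's built-in API (the URL and version files are assumptions; inside the app itself you would typically call System.Deployment.Application.ApplicationDeployment.CheckForUpdate instead):

    $published = [version](Invoke-WebRequest 'https://server/app/version.txt' -UseBasicParsing).Content.Trim()
    $installed = [version](Get-Content 'C:\App\version.txt')
    if ($published -gt $installed) {
        # Opening the deployment manifest triggers the normal ClickOnce update flow.
        Start-Process 'https://server/app/MyApp.application'
    }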
ClickOnce only downloads changed files. However, the progress dialog always shows the entire size of the application even if it's only downloading a single file. Everyone worries about that, but it's just a bug with the progress dialog.
Finally, I'm a big fan of keeping it simple. It's really easy to over-think these things and create a monstrosity that was never needed. We went through something similar at my company: we were so worried about users downloading unnecessary bytes that we broke our apps up into more, smaller assemblies. It turned into a nightmare; apps were harder to maintain and performed worse on the client. We finally undid it all and wasted weeks just to end up where we started.
I'm not saying you don't need the features you're asking for, I don't know your scenario. Just educate yourself first and know what you're getting yourself into.
We use ClickOnce at my company (a few hundred geographically dispersed users of the app). By specifying the minimum version, we can make sure that every installation gets updated automatically after a deployment. You are right that ClickOnce downloads full files only, but it downloads only the files that have changed since the previous version. If that is still a concern, you can break your application into more, smaller assemblies. I think you can also use netmodules, but Visual Studio has no built-in support for them.
In general, ClickOnce has worked well for us.
I am just in the process of implementing such a service on top of my distributed application platform. In essence, I have developed a "push" model for corporates that follows these basic principles:
Software upgrades are "managed" from the server, NOT from the client, which is in line with the deployment of corporate software as opposed to user software (this is a very important point)
Software upgrades can be customised per client application on the server, i.e. the server can deploy unique configurations to every client if required
Software upgrades can be deployed to clients at different times, or all at the same time, or any combination of the two
The software upgrade version can be specified per client, i.e. different versions can be deployed to different clients as required
All software upgrades for all clients can be "managed" from a single server, i.e. the software upgrading "service" is consistent across any application, and all applications can utilise the software upgrading "service"
Clients can implement a software upgrade policy of automatic (the application restarts as soon as the upgrade has been downloaded and is available at the client), manual (the application needs to be "sent" a custom "force upgrade" message), or on restart (the application upgrades on shutdown if an upgrade has been downloaded and is available)
All auto-upgrading functionality is transparent to any running applications as this is all performed in autonomous background threads and all inter-process communication and file transfer is handled by my framework
In essence, this now allows me (or will allow me, once I have tidied a few things up and thoroughly tested the implementation) to manage the version of any application developed by me from a central server after it has been initially installed, without any client intervention.
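As a generic illustration of the server-managed idea (this is not the author's framework; the endpoint, fields, and paths are invented), each client periodically asks the server which version it should be running:

    $client   = $env:COMPUTERNAME
    $assigned = (Invoke-RestMethod "https://upgrade-server/api/assigned-version?client=$client").version
    $current  = Get-Content 'C:\App\version.txt'
    if ($assigned -ne $current) {
        Invoke-WebRequest "https://upgrade-server/packages/$assigned.zip" -OutFile "$env:TEMP\upgrade.zip"
        # Apply according to the client's policy: automatic, manual, or on restart.
    }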