DevOps with XPages on Premises or PaaS like Bluemix - version-control

What is the best way to achieve DevOps with XPages?
Multiple developers working as a team
On-premises servers [Dev, QA, Prod]; can we replicate to Bluemix?
Source control
Automated testing of the UI / application
Unit testing business logic with a testing framework
Automated deployment
IDE/Tools
Domino Designer; are there other ways?
Note: views are used when the data is in an NSF; otherwise the data is in the cloud or SQL. No Forms or other classic Notes design elements.
What are your approaches to this?

This is a high-level overview of the topics required to attempt what you're describing. I'm breezing past lots of details, so please search them out; I've tried to reference the supporting documentation, blog posts, etc. of others that I'm currently aware of. If anyone has anything good to add, I'm happy to add it in.
There are several components involved with what you're describing, generally amounting to:
SCM workflow
building the app (NSF)
deploying the built app to a Domino server
Everything else, such as release workflow through a QA/QC environment, is secondary to the primary steps above. I'll outline what I'm currently doing, attempting to highlight where I'm working on improving the process.
1. SCM Workflow
This can be incredibly opinionated and will depend a lot on how your team does/wants to use source control with your deployment / release process. Below I'll touch on performing tests, conceptually, during/around the build step.
I've switched from a fairly generic SCM server implementation to a GitLab instance. Even running a CE instance is pretty fantastic with their CI runner capabilities. Previously, I had a Jenkins CI instance performing about the same tasks, but I had to bake more "workflow" into the Jenkins task, whereas now most of that logic is in a unified script, referenced from a config file (.gitlab-ci.yml). This is similar to how Travis CI and other CI config files work.
This config calls some additional helper scripts, but ultimately revolves around an adapted version of Egor Margineanu's PowerShell script, which invokes the headless DDE build task.
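I won't reproduce that script here, but conceptually the CI wrapper boils down to something like the sketch below (every path and argument is a placeholder, not the script's actual contents; the real headless-build arguments are documented on the IBM wiki page referenced in the next section):

    # Minimal sketch of a CI wrapper around the headless DDE build.
    $designer     = 'C:\IBM\Notes\designer.exe'   # assumed Designer install path
    $headlessArgs = $env:HEADLESS_BUILD_ARGS      # assumed to be supplied by the CI config

    # Launch Designer and block until the build run finishes (or times out).
    $proc = Start-Process -FilePath $designer -ArgumentList $headlessArgs -PassThru
    $proc | Wait-Process -Timeout 1800

    exit $proc.ExitCode

The CI config's job is then just to call this wrapper and let the exit code determine the job result.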
2. Building an NSF from Source
I've blogged about my general build process, with my previous Jenkins CI implementation. I followed the blogging of Cameron Gregor and Martin Pradny for this. Ultimately, you need to:
configure a Windows environment with Domino Designer
set up Domino Designer to import from ODP (disable export), ensuring Build Automatically is enabled
the notes.ini will need a flag of DESIGNER_AUTO_ENABLED=true
the Jenkins CI or GitLab CI runner (or other) will need to run as the logged-in user, not as a Windows service; this allows it to invoke the headless DDE command correctly, since it runs in the background as opposed to a true headless invocation
ensure that Domino Designer can start without prompting for a user's password
My blog post covers additional topics, such as flagging the build as a success or failure by scanning the output logs for failure markers. It also touches on how to submit the code to a SonarQube instance.
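Conceptually, that pass/fail flagging is just a scan of the build output; a minimal sketch, assuming a log path and marker strings that you would match to what your build actually emits:

    # Fail the CI job if the Designer build log contains failure markers.
    $log = Get-Content 'C:\build\logs\headless-build.log' -Raw

    if ($log -match 'Build failed|ERROR') {
        Write-Error 'Headless DDE build reported a failure.'
        exit 1    # a non-zero exit marks the job as failed in Jenkins/GitLab CI
    }
    exit 0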
Ref: IBM Notes/Domino App Dev Wiki page on headless designer
Testing
Any additional testing or other workflow considerations (e.g. QA/QC approval) should go around the build phase, depending on how you set up your SCM workflow. A lot of the implementation will revolve around the specifics of your setup. A general idea is to allow or prevent deployment based on the outcome of the build + test phase.
Bluemix Concerns
IBM Bluemix, the only PaaS that runs IBM XPages applications, will require some additional considerations, such as:
their Git deploy process will only accept a pre-built NSF
the NSF must be signed by the account owner's Bluemix ID
Ref:
- IBM XPages on Bluemix
- Bluemix Docs: Building XPages apps for the Bluemix Runtime
3. Deploy
To Bluemix
If you're looking to deploy an XPages app to run on Bluemix, you'll want to ensure your headless build either runs with the Bluemix ID or at least signs the NSF with it, and then deploy it for a production push either via a Git connection or the cf/bluemix command-line utility. Bluemix's receive hooks handle all the rest of the deployment concerns, such as starting/stopping the server instance.
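For the CLI route, the push itself is a standard Cloud Foundry deploy; a sketch with placeholder endpoint, credentials, and app name, assuming the target directory holds the pre-built, signed NSF and its manifest:

    # Standard Cloud Foundry CLI deploy of a pre-built artifact.
    cf api https://api.ng.bluemix.net
    cf login -u me@example.com -o MyOrg -s dev
    cf push my-xpages-app -p .\build\output   # directory containing the built NSF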
To an On-Premises Server
A user ID with appropriate credentials needs to either perform a design replace/refresh or stop a dev/test/staging server, copy the .nsf file over, and start the server back up. I've heard rumors of Cameron Gregor making use of a plugin to Domino Designer to perform the operations needed for OSGi plugin development, which sounds pretty useful. As most of my Domino application development is almost purely NSF-based, I'm focusing more on an approach of deploying to a staging/dev/test server, against which I can then run a design task to do the needed refresh/replace; this is closer to the "normal" Domino way of doing things.
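A bare-bones sketch of the stop/copy/start variant, where the server name, service name, and paths are all placeholders for illustration:

    # Take the staging Domino server down, swap in the new NSF, bring it back up.
    $server  = 'staging-srv'
    $service = 'Lotus Domino Server (DDATA)'   # assumed Windows service name

    Invoke-Command -ComputerName $server { Stop-Service -Name $using:service }
    Copy-Item -Path '.\build\output\app.nsf' `
              -Destination "\\$server\DominoData\apps\app.nsf" -Force
    Invoke-Command -ComputerName $server { Start-Service -Name $using:service }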
Summary
Again, there are a lot of moving pieces involved here, some of which get rather opinionated rather quickly. For example, I'm currently virtualizing my build machine, so I can spin up a couple of copies of it, allowing for more than one build at a time. If there are major gaps in the process, let me know and I'll fill in what I can.

Related

What's the anatomy of a Bluemix/Cloud Foundry Node-RED project?

There's lots of documentation and a kludgy console to set up continuous deployment in Cloud Foundry, but I haven't found any documentation on what the artifacts inside a repository need to be.
I don't want to cut-and-paste flows from the Node-RED editor. If that's the only way, then IBM is not ready for prime time. I'm also aware that most everything about my flows lives in the Cloudant nodered db.
A Node-RED application is more than the flows, though. What about the _design docs for my dbs?
I need device info and other stuff from the Watson console, Cloudant info, and my flows packaged up into something deployable.
Has anyone scripted this?
What I mean by this is that I can clone a Docker project, an npm project, and all sorts of other projects that implement a build->test->push mechanism. They employ a configuration script of some sort (e.g. package.json) and contain a bunch of source files for the actual application, test scripts, db scripts, whatever is necessary to deploy the application and its environment onto a host. I see lots of documentation on the toolchain and its features, but I'm not clear on whether it's possible to make use of it for my hosted Node-RED application, or whether I have to write the scripting mechanisms to offload flow info from the nodered db and query all my other dbs for their respective _design docs and all the other configuration information required to set up an IoT Node-RED application.
I forgot to mention that the copy/paste method loses information; you get no tab-level metadata. The only way to get all the flow stuff is to pull it from the nodered flow record.
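To be clear, this is roughly the kind of scripting I'm asking about: pulling the design docs straight off Cloudant's standard CouchDB API (the account, database, and credentials below are placeholders):

    # Pull all _design documents from a Cloudant (CouchDB) database via its REST API.
    $base  = 'https://myaccount.cloudant.com/nodered'
    $creds = Get-Credential   # Cloudant username/password or API key

    $result = Invoke-RestMethod -Credential $creds -Uri `
        "$base/_all_docs?startkey=%22_design/%22&endkey=%22_design0%22&include_docs=true"

    $result.rows | ForEach-Object { $_.doc } |
        ConvertTo-Json -Depth 10 | Set-Content 'design-docs.json'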
Node-RED will release a new version in a couple of days that will introduce projects, so you'll be able to use GitHub and all the usual tools to handle your app: https://twitter.com/NodeRED/status/956934949784956931 and https://nodered.org/docs/user-guide/projects/
While it doesn't address your short-term needs, I think it's the best long-term solution. Hopefully that helps.

Automatically Combine/Bundle EF Migration(s)

I'm wondering if it is possible to run some automated tasks on either a Release (web deploy) action or a Branch Merge (TFS) action?
Ideally I would like to set up a process that will automatically combine EF migrations since the last release. I'm still looking into how I would automate this, but I think the first step is hooking into a suitable event.
I haven't set up a build server yet, but I'm guessing that if the above isn't possible, then this would be an option for attaching a custom procedure to the MSBuild task?
Alternatively, if anyone has experience in automating things like this, I would be happy to hear it. I am the head of development at a web development company and I would like to facilitate our current processes by automating some of our standard procedures, and this is something we do over and over again for each development!
I appreciate your time looking at my question, thanks.
VSTS and TFS 2015 both support a CI/CD process via their new build and release system. It's very flexible and powerful. Check it out!
https://msdn.microsoft.com/Library/vs/alm/Release/getting-started/understand-rm
VS/WebDeploy does support deploying EF migrations with a web application:
https://msdn.microsoft.com/en-us/library/dd394698?f=255&MSPPError=-2147217396#efcfmigrations
This works fine for deploying a small application/system, but when you want to deploy a larger system with many components, it doesn't work as well. We create MSDeploy packages for each component of the system. For example, this is how we deploy SQL databases:
http://dotnetcatch.chief7.space/2016/02/10/deploying-a-database-project-with-msdeploy/
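As an aside, if you end up scripting the migration step yourself rather than letting WebDeploy run it, EF 6 ships a command-line runner (migrate.exe, in the tools folder of the EntityFramework NuGet package) that a build or release task can call; a rough sketch with placeholder assembly and paths:

    # Apply pending EF6 migrations from a deployment script.
    & '.\packages\EntityFramework.6.1.3\tools\migrate.exe' `
        'MyApp.Data.dll' `
        '/startUpDirectory=.\MyApp\bin' `
        '/startUpConfigurationFile=.\MyApp\web.config'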

TFS Intranet Automated Deploy Strategy

I have introduced branching/merging to my team and have talked before about how great it would be to automatically build and deploy code checked into the staging/master branches, but I'm a junior dev, not very ops-y.
The trouble I'm having is that we create intranet applications and store them on our own VMs, which we have access to, but we also have load balancing, which is causing me grief!
I can get a build to automate (well, I haven't got all the bugs figured out but I'm working my way through them) - and I can even get the build to automatically create a zip file ready for deployment.
Is it possible to configure several servers for deployment?
I.e.:
1) I check in some code to stage
Then, automatically:
2) The code builds
3) The build completes; unit tests run and pass
4) The code is packaged into a .zip
5) The .zip is deployed across the three load-balanced servers (all with the same file path)
It may be worth noting that we currently have our TFS server running Visual Studio, so the code is built on the same server where it is all stored, but this is not the server we run live code from.
Any help or tutorials specific to my setup would be GREATLY appreciated; I really want to turn this department's releasing strategies around!
I am going to address only the deployment aspect. There are a lot of different ways that this can be handled, such as:
Customizing the build template
Writing custom .Net code and inserting it into the build template (which would also involve customizing the template)
Creating a Batch or PowerShell script set to run after the build completes
Using a separate tool such as Octopus Deploy or Release Management to handle the deployments
The first thing you need to do is separate the build and deployment steps in your head. While they are tightly coupled in your model, they are two totally different tasks that need to be handled in different ways.
The second thing is to stop thinking like a developer when it comes to the deployment portion. While there will likely be a programmatic solution, you'll need to identify the manual steps first.
You stated that you're not very ops-y, by which I assume you mean you're more Developer and not Systems Analyst. If that is the case, then the third thing you'll need to do is bring in someone who is, such as your current release team.
There are then 3 major things that need to be done:
EVERYTHING needs to be standardized. If you can't standardize something, then standardize the way that it's non-standard (example: you have a bulk list of servers you need to deploy to, and you need to figure out which ones to deploy to based on their names, which can be anything; in that case, put a rule in place that all QA servers must have QA in their name, User Acceptance servers UAT, Production servers PROD, etc.).
Figure out how you're going to communicate from the build to the deployment: which builds are going to be deployed, to which servers, and where the code is going to be picked up from
You need to document every manual step, and every exception to those steps, and every exception to those exceptions.
Once you have all those pieces in place, you then need to go through each manual step and automate it, whether that's through Batch, PowerShell, or a custom-built application. Once you have all the steps automated, you'll have both the build and deploy pieces complete.
After you're able to execute a single "manual" automated deployment to a single environment, you're then ready to figure out how you want to run it for multiple environments. This can be as complex as an XML file that is iterated through, or as simple as calling the same command multiple times with different parameters.
A quick summary of how I've done this at my current job (where using a third-party deployment tool was not an option):
Created a tool using .Net WinForms to allow us to "manually" run automated builds (we use the interface to determine the input parameters, and the custom classes under the hood do all the heavy lifting; these custom classes are in a separate project that builds to its own DLL, which also allows us to test tweaks and changes to the process in a testing environment before we roll them out to our production build server)
Set up an XML file for each environment (QA, UAT, Prod, etc.) that contains all of the servers that need to be deployed to in that environment, including destination paths, scheduled tasks, and Windows services
Customized the TFS build template to include the custom classes created for the custom tool, which read the XML file and iterate through each server entry to perform the deployments (sketched below)
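To make that XML-driven loop concrete, here is a stripped-down sketch; the XML shape, paths, and names are invented for illustration:

    # Iterate an environment's server list from XML and push the build drop to each.
    # Assumed XML shape:
    #   <environment name="QA">
    #     <server host="qaweb01" path="\\qaweb01\d$\Sites\MyApp" />
    #   </environment>
    [xml]$config = Get-Content '.\environments\QA.xml'

    foreach ($server in $config.environment.server) {
        Write-Host "Deploying to $($server.host)..."
        Copy-Item -Path '.\drop\MyApp\*' -Destination $server.path -Recurse -Force
    }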
I'm more than happy to help with more specific examples and assistance. I look at things a bit differently than most people, and that helps when it comes to release management.

How can I set up continuous deployment with TFSBuild for an MVC app?

I have some questions around the best mechanism to deploy MVC web applications to different environments. Previously I used setup projects (.msi's) but as these have been discontinued in VS2012 I am looking to move to an alternative.
Let me explain my current setup. I currently have a CI setup using TFSBuild 2010 with Team Foundation Server for source control.
A number of developers work on their local machines and check in to the TFS server. We regularly deploy to a single-server dev environment and a load-balanced QA environment with 2 servers. Our current process includes installing an msi which carries out some of the following custom actions:
brings the current app offline with the app_offline.htm file
runs database scripts (from the database project in the solution)
modifies web.config (different for each QA web server)
labels the code
warms up each deployed file via an HTTP request
etc.
This is the current process. Now I would like to make some changes. Firstly, I need an alternative to msi's. From some research, I believe that web deploy via IIS and using MsDeploy is the best alternative. I can use web.config transforms for web.config modifications. Is this correct, and if so, could I get an outline of what I need to do?
Secondly, I want to set up continuous delivery via TFSBuild. I have no idea how this may be achieved; would it be possible to get an outline of how it can be integrated into my current setup? Rather than check-in driven, I would like it to be user driven following a check-in. Also, would it be possible for this to run database scripts from a database project in the solution?
Finally, there is also a production environment, but I would like to deploy to it manually - can my process also produce an artifact that I can manually install?
Vishal Joshi has some information on his blog that is reasonably good: http://vishaljoshi.blogspot.com/2010/11/team-build-web-deployment-web-deploy-vs.html. It does have the downside that your deployment password is included in the properties you pass to msbuild.
Sayed Hashimi has also posted some information on this in another question: Team Build: Publish locally using MSDeploy.
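For reference, the package-and-publish calls those posts build on look roughly like this (project, profile, and password values are placeholders; passing the password as a property is the downside mentioned above):

    # Publish via WebDeploy in one MSBuild call (assumes msbuild is on the PATH).
    & msbuild .\MyApp\MyApp.csproj /p:DeployOnBuild=true `
        /p:PublishProfile=QA /p:Password=$env:DEPLOY_PASSWORD

    # Or produce a WebDeploy package to hand off for a manual production install:
    & msbuild .\MyApp\MyApp.csproj /p:DeployOnBuild=true `
        /p:WebPublishMethod=Package /p:PackageLocation='.\artifacts\MyApp.zip'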

Solutions for automated deployment in developer environments?

I am setting up an automated deployment environment for a number of decoupled services that are in active development. While I am comfortable with the automated deployment/configuration management aspect, I am looking for strategies on how best to structure the deployment environment to make things a bit easier for developers. Some things to take into consideration:
Developers are generally building web applications, web services, and daemons -- all of which talk to one another over HTTP, sockets, etc.
The developers may not have everything running on their local machine, but they still need to be able to quickly do end-to-end testing by pointing their machine at the environment
My biggest concern with continuous deployment is that we have a large team and I do not want to be constantly restarting services while developers are working locally against those remote servers. On the flip side, delaying deployments to this development environment makes integration testing much more difficult.
Can you recommend a strategy that you have used in this situation in the past that worked well?
Continuous integration doesn't have to mean continuous deployment. You can compile/unit test/etc. the code "continuously" throughout the day without deploying it and only deploy at night. This is often a good idea anyway - to deploy at night or on demand - since people may be integration testing during the day and wouldn't want the codebase to change out from under them.
Consider how much of the software developers can test locally. If a lot, they shouldn't need the environment constantly. If not a lot, it would be good to set up mocks/stubs so that much more can be tested on a local server. Then the deployed environment is only needed for true integration testing and doesn't need to be updated constantly throughout the day.
I'd suggest setting up a CI server (Hudson?) and using it to control all deployments to both your QA and production servers. This forces you to automate all aspects of deployment and ensures that there are no ad-hoc restarts of the system by developers.
I'd further suggest that you consider publishing your build output to a repository manager like Nexus, Artifactory, or Archiva. In that way, deployment scripts could retrieve any version of a previous build. The use of a repository manager would enable your QA team to certify a release prior to its deployment onto production.
Finally, consider one of the emerging deployment automation tools. Tools like Chef, Puppet, and ControlTier can be used to further version-control the configuration of your infrastructure.
I agree with Mark's suggestion of using Hudson for build automation. We have seen successful continuous deployment projects that use Nolio ASAP (http://www.noliosoft.com) to automatically deploy the application once the build is ready. As stated, Chef, Puppet, and the like are good for middleware installations and configurations, but when you need to continuously release new application versions, a platform such as Nolio ASAP, which is application-centric, is better suited.
You should have your best IT operations folks create and approve the application release processes, and then provide an interface for the developers to run these processes on approved environments.