CI/CD for Dell Boomi - deployment

Is there any possibility of applying DevOps concepts to build and deploy Dell Boomi processes?
Dell Boomi recommends using AtomSphere for change management activities. It also says it is a UI-based tool and that we cannot export the code to source control, but I have the following questions:
Do we have any option to automate deployments across environments?
What kind of testing can be performed and at what stages?

As far as I know, it is possible to create a cross-reference table with many columns, including the process name, execution ID, date, action and so on, and use it as a tool to deploy many processes from one place.
At the time I was reading about it, there was an issue: many developers who had access to deployment could override each other's values, which caused a lot of chaos.
Instead, you can use a third-party tool like Jenkins. This video shows an example of Boomi with Jenkins: https://www.youtube.com/watch?v=mf5MsQtEa5o

You can use the AtomSphere API to automate deployments. In effect, you make a series of API calls telling AtomSphere to deploy a given version of a package to an environment. That step is safe to repeat, so you can deploy to an arbitrary number of environments.
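For illustration, a single deployment call from PowerShell might look roughly like the sketch below; the object and field names (DeployedPackage, packageId, environmentId) and the account/environment IDs are placeholders from memory, so verify them against the AtomSphere API reference before relying on them.

    # Rough sketch: deploy an already-packaged component to an environment via
    # the AtomSphere REST API. Object and field names here are from memory and
    # should be verified; some endpoints also expect an '@type' property.
    $accountId     = 'yourBoomiAccountId'        # placeholder
    $environmentId = 'target-environment-guid'   # placeholder
    $packageId     = 'packaged-component-guid'   # placeholder

    # Basic auth; with an API token the user name is typically 'BOOMI_TOKEN.<email>'.
    $pair    = 'user@example.com:passwordOrToken'
    $headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }

    $body = @{ environmentId = $environmentId; packageId = $packageId } | ConvertTo-Json

    Invoke-RestMethod -Method Post `
        -Uri "https://api.boomi.com/api/rest/v1/$accountId/DeployedPackage" `
        -Headers $headers -ContentType 'application/json' -Body $body

Repeating the same call with a different environment ID is what makes the per-environment rollout repeatable.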
At a strategy level, testing Boomi is like testing any other integration tool. You should have unit tests, integration tests and system tests, and there is a range of techniques for each. Dell has some wiki pages that talk through unit testing approaches specifically.

SSRS 2008 R2 to SP integrated deployment scripting

I've gone down a rabbit hole and I'm finding it hard to know where to go now.
I have a report project I'm trying to script up for deployment to a SharePoint site on a highly controlled production server. On my Dev box I can just deploy my project from BIDS and the reports run. If I upload my rdls, datasets and datasource to the document library directly they don't. I've done some digging and found that the uploaded files aren't linked in any way and that BIDS does some extra steps to set the DataSource for the Shared Datasets and then sets the reference to these DataSets on the Rdls.
So I've been poking around and can see that I need to call SetItemReferences on ReportingService2010.asmx to define the links, but I'm lost using PowerShell. Some scripts I've found are focused on setting DataSources, so I'm trying to adapt those using bits from other scripts but getting lost. One example does $Reference = New-Object -TypeName SSRS.ReportingService2010.ItemReference but I don't know where they're getting the SSRS. namespace from.
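My best guess is that the SSRS. prefix comes from the -Namespace argument those scripts pass to New-WebServiceProxy, so something like this untested sketch (the site URL and item paths are placeholders for my own):

    # Untested sketch -- the SharePoint URL and item paths below are placeholders.
    $proxy = New-WebServiceProxy `
        -Uri 'http://sharepoint/_vti_bin/ReportServer/ReportService2010.asmx?wsdl' `
        -Namespace 'SSRS.ReportingService2010' `
        -UseDefaultCredential

    # Point one of the report's dataset references at the shared dataset.
    $ref = New-Object SSRS.ReportingService2010.ItemReference
    $ref.Name      = 'DataSet1'                                   # name used inside the .rdl
    $ref.Reference = 'http://sharepoint/Library/SharedDataSet.rsd'

    $proxy.SetItemReferences('http://sharepoint/Library/Report.rdl', @($ref))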
Incidentally, the structure I have is:
- One Shared DataSource points to a SharePoint List
- One DataSet pointing to the shared DataSource
- Four reports with NO embedded DataSources and five embedded DataSet references each pointing to the shared DataSet applying various filters.
Is there already a built in way to do this so I can avoid hassles?
Requirements here are that I need something extremely simple that doesn't require extra PowerShell modules to be installed (if possible). The network is highly controlled and it's difficult enough to get scripts we run ourselves approved, let alone some third-party module installed on the farm of machines in Prod. Basically it will take at least six months to scan, test and formally approve any add-ons, but if we write a very simple script it's much easier.
Yes - deploy with your browser. I have written 3 separate report projects with SSRS on a highly controlled SharePoint 2010 production environment. Each one of them, I have deployed using the browser.
Deploying using your browser is simpler than PowerShell. Follow the general steps outlined in the last part of this thread. Doing it via PowerShell is possible, but it is a far more difficult task.
If the admins have this production environment so highly controlled, then there should be a parallel staging environment that is kept in the same configuration as production and available to you for doing DevOps on your SSRS reports. You should request to test your install on the staging environment in order to work out your deployment issues (either by browser or PowerShell). If the request is denied, then request again. It's impossible to get the deployment right if you don't have access to a similar system to work against.
DevOps on these reports is the last mile of the race and can be difficult if you are the first to do it at your organization. You can do it; just keep going and your reports will be installed. Keep good notes so that when you develop future reports you can repeat the process, and you will be the go-to person for getting it done in the future. Don't lose faith.

DevOps with XPages on Premises or PaaS like Bluemix

What is the best way to achieve DevOps with XPages?
- Multiple developers working as a team
- On-premises servers [Dev, QA, Prod]; can we replicate to Bluemix?
- Source control
- Automated testing: UI / application, unit testing business logic with a testing framework
- Automated deployment
- IDE/Tools: Domino Designer; are there other ways?
Note: use of views when the data is in an NSF; otherwise the data is in the cloud or SQL. No forms or other classic Notes design elements.
What are your approaches to this?
This is a high-level overview of the topics required to attempt what you're describing. I'm breezing past lots of details, so please search them out; I've tried to reference the supporting documentation, blog posts, etc. of others that I'm currently aware of. If anyone has anything good to add, I'm happy to add it in.
There are several components involved with what you're describing, generally amounting to:
SCM workflow
building the app (NSF)
deploying the built app to a Domino server
Everything else, such as release workflow through a QA/QC environment, is secondary to the primary steps above. I'll outline what I'm currently doing, attempting to highlight where I'm working on improving the process.
1. SCM Workflow
This can be incredibly opinionated and will depend a lot on how your team does/wants to use source control with your deployment / release process. Below I'll touch on performing tests, conceptually, during/around the build step.
I've switched from a fairly generic scm server implementation to a GitLab instance. Even running a CE instance is pretty fantastic with their CI runner capabilities. Previously, I had a Jenkins CI instance performing about the same tasks, but had to bake more "workflow" into the Jenkins task, whereas now most of that logic is in a unified script, referenced from a config file (.gitlab-ci.yml). This is similar to how a Travis CI or other similar CI config file works.
This config calls some additional helper work, but ultimately revolves around an adapted version of Egor Margineanu's PowerShell script which invokes the headless DDE build task.
2. Building an NSF from Source
I've blogged about my general build process, with my previous Jenkins CI implementation. I followed the blogging of Cameron Gregor and Martin Pradny for this. Ultimately, you need to:
configure a Windows environment with Domino Designer
set up Domino Designer to import from ODP (disable export), ensuring Build Automatically is enabled
the notes.ini will need a flag of DESIGNER_AUTO_ENABLED=true
the Jenkins CI or GitLab CI runner (or other) will need to run as the logged in user, not a Windows service; this allows it to invoke the "headless dde" command correctly, since it runs in the background as opposed to a true headless invocation
ensure that Domino Designer can start without prompting for a user's password
My blog post covers additional topics such as flagging the build as a success or failure by scanning the output logs for failure markers (a rough sketch of that scan follows below). It also touches on how to submit the code to a SonarQube instance.
Ref: IBM Notes/Domino App Dev Wiki page on headless designer
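The log scan itself can be as small as a Select-String call; a rough sketch, assuming the headless build writes a log file whose path and failure markers you know for your setup (both are placeholders here):

    # Rough sketch of the log scan described above; log path and markers are placeholders.
    $log = 'C:\HeadlessBuild\logs\headless-build.log'

    if (Select-String -Path $log -Pattern 'FAILED|Build error' -Quiet) {
        Write-Error 'Headless DDE build reported errors.'
        exit 1      # a non-zero exit code fails the CI job in Jenkins or GitLab CI
    }
    Write-Host 'Headless DDE build looks clean.'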
Testing
Any additional testing or other workflow considerations (e.g.- QA/QC approval) should go around the build phase, depending on how you set up your SCM workflow. A lot of the implementation will revolve around the specifics of your setup. A general idea is to allow/prevent deployment based on the outcome of the build + test phase.
Bluemix Concerns
IBM Bluemix, the only PaaS that runs IBM XPages applications, will require some additional considerations, such as:
their Git deploy process will only accept a pre-built NSF
the NSF must be signed by the account owner's Bluemix ID
Ref:
- IBM XPages on Bluemix
- Bluemix Docs: Building XPages apps for the Bluemix Runtime
3. Deploy
To Bluemix
If you're looking to deploy an XPages app to run on Bluemix, you would want to either ensure your headless build runs with the Bluemix ID, or is at least signed with it, and then deploy it for a production push either via a git connection or the cf/bluemix command line utility. Bluemix's receive hooks handle all the rest of the deployment concerns, such as starting/stopping the server instance, etc.
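If you take the command line route, the push itself is the standard Cloud Foundry workflow; a rough sketch, in which the API endpoint, org/space, app name and output folder are placeholders, and the XPages runtime may need manifest settings beyond this:

    # Rough sketch using the Cloud Foundry CLI; endpoint, org, space, app name and
    # the folder holding the pre-built, signed NSF are all placeholders.
    cf login -a https://api.ng.bluemix.net -u you@example.com -o YourOrg -s dev
    cf push my-xpages-app -p .\build\output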
To On-Premise Server
A user ID with appropriate credentials needs to perform the work: either a design replace/refresh, or stopping a dev/test/staging server, copying the .nsf into place, then starting it back up. I've heard rumors of Cameron Gregor making use of a plugin to Domino Designer to perform the operations needed for OSGi plugin development, which sounds pretty useful. As most of my Domino application development is almost purely NSF based, I'm focusing more on an approach of deploying to a staging/dev/test server, which I can then perform a design task on to do the needed refresh/replace; this is closer to the "normal" Domino way of doing things.
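The stop / copy / start variant is only a few lines of script; a very simplified sketch, in which the Windows service name and both paths are placeholders for your own environment:

    # Simplified sketch of the stop / copy / start approach described above.
    $service = 'Lotus Domino Server (DDominoData)'   # placeholder service name
    $source  = '\\buildbox\drop\MyApp.nsf'           # placeholder build output
    $target  = 'D:\Domino\Data\apps\MyApp.nsf'       # placeholder data directory path

    Stop-Service -Name $service
    Copy-Item -Path $source -Destination $target -Force
    Start-Service -Name $service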
Summary
Again, there are a lot of moving pieces involved here, some of which get rather opinionated rather quickly. For example, I'm currently virtualizing my build machine so I can spin up a couple of virtual machines of it, allowing for more than one build at a time. If there are major gaps in the process, let me know and I'll fill in what I can.

TFS Intranet Automated Deploy Strategy

I have introduced branching/merging to my team and have talked before about how it would be great to automatically build and deploy code checked into the staging/master branches, but I'm a junior dev, not very ops-y.
The trouble I'm having is that we create intranet applications and store them on our own VMs, which we have access to, but we also have load balancing, which is causing me grief!
I can get a build to automate (well, I haven't got all the bugs figured out but I'm working my way through them) - and I can even get the build to automatically create a zip file ready for deployment.
Is it possible to configure several servers for deployment?
I.e.:
1) I check in some code to stage
***Automatically***
2) Code builds
3) Build completes, Unit tests run and they complete
4) Code is packaged into a .zip
5) The .zip is deployed across the three load-balanced servers (all with the same file path).
***
Maybe worth noting: we currently have our TFS server running Visual Studio, so the code is built on the same server it is all stored on, but this is not the server we run live code from.
Any help or tutorials specific to my setup would be GREATLY appreciated; I really want to turn this department's release strategies around!
I am going to address only the deployment aspect. There are a lot of different ways that this can be handled, such as:
Customizing the build template
Writing custom .Net code and inserting it into the build template (which would also involve customizing the template)
Creating a batch or PowerShell script set to run after the build completes
Using a separate tool such as Octopus Deploy or Release Manager to handle the deployments
The first thing you need to do is separate the build and deployment steps in your head. While they are tightly coupled in your model, they are two totally different tasks that need to be handled different ways.
The second thing is to stop thinking like a developer when it comes to the deployment portion. While there will likely be a programmatic solution, you'll need to identify the manual steps first.
You stated that you're not very ops-y, by which I assume you mean you're more of a developer than a systems analyst. If that is the case, then the third thing you'll need to do is get someone involved who is, such as your current release team.
There are 3 major things that need to be done then:
EVERYTHING needs to be standardized. If you can't standardize something, then standardize the way that it's non-standard (example: you have a bulk list of servers you need to deploy to, and you need to figure out which ones to deploy to based on their names, which can be anything. In that case, a rule needs to be put in place that all QA servers need to have QA in their name, User Acceptance servers need UAT, Production servers need PROD, etc.; see the sketch after this list).
Figure out how you're going to communicate from the build to the deployment: which builds are going to be deployed, to which servers, and where the code is going to be picked up from.
You need to document every manual step, and every exception to those steps, and every exception to those exceptions.
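As a small illustration of the naming rule in the first point, here is a sketch that derives the target environment from the server name; the server names and patterns are made up.

    # Illustrative only: derive the environment from a naming convention.
    $servers = 'INTRAQA01', 'INTRAUAT01', 'INTRAPROD01', 'INTRAPROD02'

    $byEnvironment = $servers | Group-Object {
        switch -Wildcard ($_) {
            '*QA*'   { 'QA' }
            '*UAT*'  { 'UAT' }
            '*PROD*' { 'Production' }
            default  { 'Unknown' }
        }
    }
    $byEnvironment | ForEach-Object { '{0}: {1}' -f $_.Name, ($_.Group -join ', ') }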
Once you have all those pieces in place, you need to go through each manual step and automate it, whether that's through batch, PowerShell, or a custom-built application. Once you have all the steps automated, you'll have both the build and deploy pieces complete.
After you're able to execute a single "manual" automatic deployment to a single environment, you're then ready to figure out how you want to run it for multiple environments. This can range from an XML file that is iterated through to simply calling the same command multiple times with different parameters.
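For example, the XML-driven variant might look something like the sketch below; the XML shape, paths and the use of PowerShell 5's Expand-Archive are assumptions rather than anything TFS gives you out of the box.

    # Illustrative only -- the XML layout and all paths are hypothetical.
    # Expected file shape:
    # <servers>
    #   <server name="QAWEB01" path="\\QAWEB01\d$\Sites\Intranet" />
    #   <server name="QAWEB02" path="\\QAWEB02\d$\Sites\Intranet" />
    # </servers>
    [xml]$config = Get-Content 'C:\Deploy\QA.servers.xml'
    $package     = '\\tfsbuild\drops\Intranet\latest\Intranet.zip'

    foreach ($s in $config.servers.server) {
        Write-Host "Deploying to $($s.name)"
        Expand-Archive -Path $package -DestinationPath $s.path -Force
    }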
A quick summary of how I've done this at my current job (where using a third-party deployment tool was not an option):
Created a tool using .Net WinForms to allow us to "manually" run automated builds (We use the interface to determine the input parameters, and the custom classes under the hood do all the heavy lifting. These custom classes are in a separate project that builds to its own DLL. This also allows us to test tweaks and changes to the process in a testing environment before we roll it out to our production build server)
Set up an XML file for each set of environments (QA, UAT, Prod, etc.) that contains all of the servers that need to be deployed to in that environment, including destination paths, scheduled tasks, and Windows Services
Customize the TFS build template and include the custom classes created for the custom tool, which will read the XML file and iterate through each server entry to perform the deployments
I'm more than happy to help with more specific examples and assistance; I look at things a bit differently than most people, and that helps when it comes to release management.

How can I share deployment code between Lab Management and Release Management

After having just started using Microsoft Release Management, I am more and more convinced that it is not well suited to running integration tests. This might be a false impression, and I'd love to get more input on it. When we first considered the tool, I had the intention of running the tests defined in our test plan through its pipeline, but now I'm seeing that we should be running those as frequently as possible. We would like to run integration testing every night, but our release candidates are only defined at the end of sprints, so using Release Management for that seems conflicting.
With the tool out of the equation, we are considering exploring the Lab Template again. We did some very minor tests with it a few months ago in a legacy project but never went too far. My main concern now is that both stages need deployment:
the Release Management pipeline needs to deploy our projects to the QA and production environment
the Lab Template also needs to deploy the project on a few virtual machines to run integration tests on
Release Management uses some very nice abstractions to achieve that. You can code machine scopes and define components based on the drop folder structure to describe each part of the whole application to be deployed. On the other hand, the Lab Management workflow does not support this (or perhaps I'm just missing it). The standard way to make deployment work for lab testing is to write a custom PowerShell script that moves the files from the build drop folder to the correct places, creates the application pools for web projects, and so on, all by hand.
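For reference, that kind of hand-rolled lab deployment script tends to look roughly like the sketch below; the drop path, site name, port and target folder are placeholders, not anything prescribed by Lab Management.

    # Rough sketch of a hand-rolled lab deployment script; all names and paths are placeholders.
    Import-Module WebAdministration

    $drop   = '\\tfsbuild\drops\MyApp\latest\_PublishedWebsites\MyApp.Web'
    $target = 'C:\inetpub\MyApp'

    # Copy the built site from the drop folder into place.
    if (-not (Test-Path $target)) { New-Item -ItemType Directory -Path $target | Out-Null }
    Copy-Item -Path "$drop\*" -Destination $target -Recurse -Force

    # Create the application pool and site if they don't already exist.
    if (-not (Test-Path 'IIS:\AppPools\MyAppPool')) {
        New-WebAppPool -Name 'MyAppPool'
    }
    if (-not (Test-Path 'IIS:\Sites\MyApp')) {
        New-Website -Name 'MyApp' -Port 8080 -PhysicalPath $target -ApplicationPool 'MyAppPool'
    }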
Ideally, I'd like to just share the entire deployment workflow between both tools and, since the Release Management way of doing it seems much simpler, I'd use that. This would make it easier to maintain both pipelines at the same time, which I assume is actually commonplace.
What is the correct approach to share the deployment code as much as possible between the two tools?
I would expect that better integration between RM and MTM/LM will be a future feature. In the interim, you could investigate using Desired State Configuration to handle having a single script that configures environments for you.
DSC support isn't really out-of-the-box in RM Update 2, but RM Update 3 will have built-in support for DSC to both Azure and on-prem VMs. Update 3 CTP 1 is out right now, but it's not production-ready.
You can still use DSC from RM in Update 2, it just requires a bit more work.
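As a concrete illustration of the "single script that configures environments" idea, a minimal DSC configuration might look like the sketch below; the resources, node name and paths are placeholders, and none of this is tied to RM or Lab Management specifically.

    # Minimal DSC sketch; node names, drop path and target path are placeholders.
    Configuration WebAppServer
    {
        param ([string[]] $NodeName = 'localhost')

        Node $NodeName
        {
            WindowsFeature IIS
            {
                Ensure = 'Present'
                Name   = 'Web-Server'
            }

            File SiteContent
            {
                Ensure          = 'Present'
                Type            = 'Directory'
                Recurse         = $true
                SourcePath      = '\\build\drop\MyApp'   # placeholder drop location
                DestinationPath = 'C:\inetpub\MyApp'
                DependsOn       = '[WindowsFeature]IIS'
            }
        }
    }

    WebAppServer -NodeName 'qa-web01'                        # generates qa-web01.mof
    Start-DscConfiguration -Path .\WebAppServer -Wait -Verbose

Because the same configuration can be pushed from either tool, it is one way to keep a single deployment definition shared between the RM and Lab Management pipelines.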

Solutions for automated deployment in developer environments?

I am setting up an automated deployment environment for a number of decoupled services that are in active development. While I am comfortable with the automated deployment/configuration management aspect, I am looking for strategies on how best to structure the deployment environment to make things a bit easier for developers. Some things to take into consideration:
Developers are generally building web applications, web services, and daemons -- all of which talk to one another over HTTP, sockets, etc.
The developers may not have everything running on their local machine, but they still need to be able to quickly do end-to-end testing by pointing their machine at the environment.
My biggest concern with continuous deployment is that we have a large team and I do not want to be constantly restarting services while developers are working locally against those remote servers. On the flip side, delaying deployments to this development environment makes integration testing much more difficult.
Can you recommend a strategy that you have used in this situation in the past that worked well?
Continuous integration doesn't have to mean continuous deployment. You can compile/unit test/etc. the code "continuously" throughout the day without deploying it and only deploy at night. This is often a good idea anyway (deploying at night or on demand), since people may be integration testing during the day and wouldn't want the codebase to change out from under them.
Consider how much of the software developers can test locally. If a lot, they shouldn't need the environment constantly. If not a lot, it would be good to set up mocks/stubs so that much more can be tested on a local server. Then the deployed environment is only needed for true integration testing and doesn't need to be updated constantly throughout the day.
I'd suggest setting up a CI server (Hudson?) and using it to control all deployments to both your QA and production servers. This forces you to automate all aspects of deployment and ensures that there are no ad-hoc restarts of the system by developers.
I'd further suggest that you consider publishing your build output to a repository manager like Nexus, Artifactory or Archiva. That way, deployment scripts can retrieve any version of a previous build. The use of a repository manager would also enable your QA team to certify a release prior to its deployment to production.
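Retrieving a specific version from such a repository is usually just an HTTP GET against a predictable path; a rough sketch in PowerShell, assuming a Maven-style repository layout with placeholder coordinates:

    # Rough sketch: pull a specific build out of a Maven-style repository
    # (Nexus/Artifactory/Archiva). URL and coordinates are placeholders.
    $repo     = 'https://nexus.example.com/repository/releases'
    $group    = 'com/example/myapp'   # groupId with dots replaced by slashes
    $artifact = 'myapp'
    $version  = '1.4.2'

    $url = "$repo/$group/$artifact/$version/$artifact-$version.war"
    Invoke-WebRequest -Uri $url -OutFile "C:\deploy\$artifact-$version.war"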
Finally, consider one of the emerging deployment automation tools. Tools like Chef, Puppet and ControlTier can be used to further version-control the configuration of your infrastructure.
I agree with Mark's suggestion of using Hudson for build automation. We have seen successful continuous deployment projects that use Nolio ASAP (http://www.noliosoft.com) to automatically deploy the application once the build is ready. As stated, Chef, Puppet and the like are good for middleware installation and configuration, but when you need to continuously release new application versions, a platform such as Nolio ASAP, which is application-centric, is better suited.
You should have your best IT operations folks create and approve the application release processes, and then provide an interface for the developers to run those processes on approved environments.