How do we control application type versions retention? - azure-service-fabric

We want to execute external integration tests and manually call a rollback if something is wrong.
We're using the 'Service Fabric Application Deployment' task in Team Services (VSTS) and it seems to keep only the latest version in the cluster.
Under Cluster --> Applications --> [Application] --> Essentials, only one row is listed, which shows the latest version.
Also, attempting Start-ServiceFabricApplicationUpgrade results in 'Application type and version not found.'
How do we alter the retention behaviour for previous application type versions? (And what is the default?)
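For context, this is roughly what is being attempted, sketched in PowerShell; the type name, application name, and version below are placeholders, and it assumes an existing cluster connection via Connect-ServiceFabricCluster. Start-ServiceFabricApplicationUpgrade can only target a version that is still registered in the cluster, which is what Get-ServiceFabricApplicationType reports.

```powershell
# A rough sketch (placeholder names) -- assumes the cluster connection already exists.

# List the application type versions currently registered in the cluster;
# an upgrade can only target a version that appears here.
Get-ServiceFabricApplicationType -ApplicationTypeName 'MyAppType'

# "Roll back" by upgrading the running application to a previously registered version.
Start-ServiceFabricApplicationUpgrade `
    -ApplicationName 'fabric:/MyApp' `
    -ApplicationTypeVersion '1.0.0' `
    -Monitored `
    -FailureAction Rollback
```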

I don't have the answer to your question, but I do offer this thought:
While I understand there may be a valid use case out there for trying to do this, I think a more accepted approach is to set up a test environment that matches production very closely. Deploy to test and test the heck out of it before approving the deployment to production.
One of the main selling points of Service Fabric is its ability to be so redundant, yet with your proposed workflow you are deploying code to that environment that you're not entirely confident in. I think that really goes against what Service Fabric offers you.
Since you will be testing it so thoroughly on the Test environment, hopefully anything you end up finding in Production is small enough to be fixed through a patch a few hours later or however fast you can fix it.

Related

DevOps with XPages on premises or PaaS like Bluemix

What is the best way to achieve DevOps with XPages?
Multiple developers working as a team
On-premises servers [Dev, QA, Prod]; can we replicate to Bluemix?
Source control
Automated testing of the UI/application; unit testing business logic with a testing framework
Automated deployment
IDE/Tools
Domino Designer; are there other ways?
Note: use of Views when the data is in an NSF; otherwise the data is in the cloud or SQL. No Forms or other classic Notes design elements.
What are your approaches to this?
This is a high-level overview of the topics required to attempt what you're describing. I'm breezing past lots of details, so please search them out; I've tried to reference the supporting documentation, blog posts, etc. from others that I'm currently aware of. If anyone has anything good to add, I'm happy to add it in.
There are several components involved with what you're describing, generally amounting to:
scm workflow
building the app (NSF)
deploying the built app to a Domino server
Everything else, such as release workflow through a QA/QC environment, is secondary to the primary steps above. I'll outline what I'm currently doing, attempting to highlight where I'm working on improving the process.
1. SCM Workflow
This can be incredibly opinionated and will depend a lot on how your team does/wants to use source control with your deployment / release process. Below I'll touch on performing tests, conceptually, during/around the build step.
I've switched from a fairly generic scm server implementation to a GitLab instance. Even running a CE instance is pretty fantastic with their CI runner capabilities. Previously, I had a Jenkins CI instance performing about the same tasks, but had to bake more "workflow" into the Jenkins task, whereas now most of that logic is in a unified script, referenced from a config file (.gitlab-ci.yml). This is similar to how a Travis CI or other similar CI config file works.
This config calls some additional helper work, but ultimately revolves around an adapted version of Egor Margineanu's PowerShell script which invokes the headless DDE build task.
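As a very rough sketch of what such a wrapper ends up doing (not a reproduction of that script), the CI job launches Designer headlessly against the on-disk project. The install path, output path, command-file format, and the exact headless flags below are assumptions, so check the referenced script and the headless Designer wiki for the real invocation.

```powershell
# A very rough sketch (placeholder paths; the headless command syntax is an
# assumption -- see the referenced script / headless Designer wiki).
$designer = 'C:\Program Files (x86)\IBM\Notes\designer.exe'   # assumed install path
$odp      = Join-Path $PSScriptRoot 'odp'                     # on-disk project in the repo
$nsf      = Join-Path $PSScriptRoot 'build\MyApp.nsf'         # assumed build output

# Headless Designer reads its build instructions from a command file.
$cmdFile = Join-Path $env:TEMP 'dde-build.txt'
"true,true,importandbuild,$odp,$nsf" | Set-Content $cmdFile

# Launch Designer in headless mode and wait for the build to finish.
Start-Process -FilePath $designer `
    -ArgumentList "-RPARAMS -console -vmargs -Dcom.ibm.designer.cmd.file=`"$cmdFile`"" `
    -Wait
```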
2. Building an NSF from Source
I've blogged about my general build process, with my previous Jenkins CI implementation. I followed the blogging of Cameron Gregor and Martin Pradny for this. Ultimately, you need to:
configure a Windows environment with Domino Designer
set up Domino Designer to import from ODP (disable export), ensuring Build Automatically is enabled
the notes.ini will need a flag of DESIGNER_AUTO_ENABLED=true
the Jenkins CI or GitLab CI runner (or other) will need to run as the logged in user, not a Windows service; this allows it to invoke the "headless dde" command correctly, since it runs in the background as opposed to a true headless invocation
ensure that Domino Designer can start without prompting for a user's password
My blog post covers additional topics such as flagging the build as a success or failure by scanning the output logs for failure markers. It also touches on how to submit the code to a SonarQube instance.
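As an illustration of that log-scan idea, the gist is just a pattern match plus an exit code; the log path and failure marker below are placeholders for whatever your headless build actually writes.

```powershell
# A minimal sketch of failing the CI job based on the build log
# (placeholder log path and failure marker -- match whatever your build writes).
$log = 'C:\build\logs\headless-build.log'

if (Select-String -Path $log -Pattern 'failed' -Quiet) {
    Write-Error 'Headless DDE build reported a failure.'
    exit 1   # a non-zero exit code lets the CI runner mark the job as failed
}
exit 0
```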
Ref: IBM Notes/Domino App Dev Wiki page on headless designer
Testing
Any additional testing or other workflow considerations (e.g.- QA/QC approval) should go around the build phase, depending on how you set up your SCM workflow. A lot of the implementation will revolve around the specifics of your setup. A general idea is to allow/prevent deployment based on the outcome of the build + test phase.
Bluemix Concerns
IBM Bluemix, the only PaaS that runs IBM XPages applications, will require some additional considerations, such as:
their Git deploy process will only accept a pre-built NSF
the NSF must be signed by the account owner's Bluemix ID
Ref:
- IBM XPages on Bluemix
- Bluemix Docs: Building XPages apps for the Bluemix Runtime
3. Deploy
To Bluemix
If you're looking to deploy an XPages app to run on Bluemix, you would want to either ensure your headless build runs with the Bluemix ID, or is at least signed with it, and then deploy it for a production push either via a git connection or the cf/bluemix command line utility. Bluemix's receive hooks handle all the rest of the deployment concerns, such as starting/stopping the server instance, etc.
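A minimal sketch of the command-line route, assuming the cf CLI is installed; the endpoint, org/space, app name, and NSF path are placeholders, and the Bluemix docs linked above cover the exact runtime and manifest details for XPages apps.

```powershell
# A minimal sketch of pushing a pre-built, pre-signed NSF with the cf CLI
# (placeholder endpoint, org/space, app name, and path).
cf login -a https://api.ng.bluemix.net -u me@example.com -o MyOrg -s dev
cf push MyXPagesApp -p .\build\MyApp.nsf
```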
To On-Premise Server
A user ID with appropriate level credentials needs to perform the work of either performing a design replace/refresh or stopping a dev/test/staging server, performing the file copy of the .nsf, then starting it back up. I've heard rumors of Cameron Gregor making use of a plugin to Domino Designer to perform the operations needed for OSGi plugin development, which sounds pretty useful. As most of my Domino application development is almost purely NSF based, I'm focusing more on an approach of deploying to a staging/dev/test server, which I can then perform a design task on to do the needed refresh/replace; closer to the "normal" Domino way of doing things.
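For the staging-server route, the core of it is just a stop/copy/start (or a design refresh instead of the raw copy) performed with sufficient rights. A rough sketch, where the Windows service name and paths are placeholders for your environment:

```powershell
# A rough sketch of the stop/copy/start route to a dev/test Domino server
# (placeholder service name and paths -- adjust for your environment).
$service = 'Lotus Domino Server (DominoData)'   # assumed Windows service name
$drop    = '\\buildserver\drop\MyApp.nsf'
$target  = 'D:\Domino\Data\apps\MyApp.nsf'

Stop-Service -Name $service
Copy-Item -Path $drop -Destination $target -Force
Start-Service -Name $service

# Alternatively, leave the server running and perform a design refresh/replace
# against the staged template instead of a raw file copy.
```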
Summary
Again, there are a lot of moving pieces involved here, some of which get rather opinionated rather quickly. For example, I'm currently virtualizing my build machine so I can spin up a couple of virtual machines from it, allowing for more than one build at a time. If there are major gaps in the process, let me know and I'll fill in what I can.

DEP6500 errors on multi-app deploys

I'm receiving an error that's nearly identical to what's posted in How to solve DEP6500 while deploying a solution with multiple projects to an emulator or Lumia 950xl:
The issue isn't resolved, and I've got further detail to add here in the hope that it might resolve and clarify both questions (that question is presently the top hit when I search for DEP6500).
AFAICT, mine really appears to be a VS15 solution deployment configuration issue.
I have a single solution with five projects: three apps, a class library, and a win runtime component:
First trivial app, references #2
Universal class library, references no other project directly, but attempts to use an AppServiceConnection to connect to #4
Second trivial app, just a hello world, references absolutely nothing
Third trivial app, references #5, meant to be used via an AppServiceConnection
Universal windows runtime component, references nothing
Really, what I want to be able to do is start with nothing deployed to the device, select "Deploy Solution", and have all three apps successfully deployed.
Based on the rest of the description below, I'm clearly misconfiguring this solution, but for the life of me I don't see where.
At the solution level, if I configure any combination of two or more of the three apps to deploy, I get the DEP6500 error when I try "Deploy Solution" - actually, two DEP6500 errors when three apps are configured to deploy.
If I configure only one of the three apps to deploy, deploying the solution works just fine.
If I uninstall every deployed app and deploy just #1, as you might expect, it has trouble at runtime when it tries to use #4.
If, instead, I deploy just #1 and then deploy just #4, #1 runs just fine.
As I said earlier, if #1 and #4 are both configured to deploy, deploying the whole solution fails.
The third app, #3, is really uninvolved in this whole mess, I only added it to better characterize the problems deploying #1 and #4.
Seeing as each deploys just fine individually and all three can be deployed to my device at the same time if I deploy them one at a time, how can I configure Visual Studio 15 to deploy all three when I run "Deploy Solution"?
Finally, it would help to find out what kind of port the IDE is referring to when it produces the error in question:
DEP6500 : A specified communication resource (port) is already in use by another application. 0x89731800
Helpful smaller questions may be: what is that port used for? When is it opened? How long is it open? How can I configure VS15 in a way that avoids port collisions between apps during full-solution deploys?
The issue here was that Visual Studio 15 just doesn't appear to be able to time the deployment correctly when you ask it to do a full deploy of every project in one action.
In this case, manually deploying each project one at a time yields a good result.
I suspect the "port" in question has to do with device deployment, as I can do a full deploy to my local machine without issue but see the DEP6500 error on deploys to my Lumia 950XL.
My aim was to be able to do a full deploy from a single VS15 menu option and that's still not technically working consistently, but I suppose I've found a workaround in as much as a piecemeal manual deploy is working everywhere.

Updating Deployments SCCM

I'm super new to SCCM and trying out some stuff.
At the moment I create a lot of applications to deploy to around 50 clients.
Before I deploy them to all clients, I test them on a test client.
The problem is that if I change something in the deployment type, like the installation command, I have to delete the deployment every time afterwards and deploy it again, or the change won't reach the client when I install the application next time.
There's probably a much easier method that I can't figure out at the moment.
So how do I apply the changes I made after the application is already deployed?
Greetings,
Paxz.
The application deployment command line will only be executed if the application is not detected - i.e. the Application Detection criteria evaluate to false. With this premise, it is possible to change the Application Detection criteria so they evaluate to false... perhaps add an additional rule to include "file1.txt exists"? This should work, but it is ugly and I would not recommend it.
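For illustration, a script-based detection rule along the lines of that "file1.txt exists" idea could look like the sketch below; the marker path is a placeholder, and it relies on the Configuration Manager convention (as I understand it) that output plus exit code 0 means "installed", while no output means "not installed", so the deployment command line runs again.

```powershell
# A minimal sketch of a script-based detection rule (placeholder path).
# Output + exit code 0 => "installed"; no output => "not installed".
$marker = 'C:\Program Files\MyApp\file1.txt'

if (Test-Path $marker) {
    Write-Output 'Installed'
}
exit 0
```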
A better approach
I prefer to test my application deployments on VMs in the first instance: prepare the destination machine, snapshot it, then deploy.
If you need to tweak your deployment you can then make the required changes, redistribute the content (if required), then restore the VM's snapshot for a fresh deployment.
I managed to get an answer from Microsoft's TechNet forum.
For deployments to pick up the updated command line, I just have to trigger the next policy polling cycle.
This will only be effective for clients that haven't executed the deployment type yet.
Other than that, there seems to be no way other than deleting the deployment and re-deploying it for the changes to take effect.
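If it helps, this is roughly how that policy polling cycle can be pushed from the client side with PowerShell; the schedule GUID is the well-known Machine Policy Retrieval & Evaluation Cycle ID, and remote or at-scale execution is left out of the sketch.

```powershell
# A minimal sketch: ask the ConfigMgr client to pull machine policy now instead
# of waiting for the next scheduled polling cycle.
# {00000000-0000-0000-0000-000000000021} = Machine Policy Retrieval & Evaluation Cycle.
Invoke-WmiMethod -Namespace 'root\ccm' -Class 'SMS_Client' `
    -Name 'TriggerSchedule' `
    -ArgumentList '{00000000-0000-0000-0000-000000000021}'
```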

How can I share deployment code between Lab Management and Release Management

After having just started using Microsoft Release Management, I am more and more convinced that it is not well suited to running integration tests. This might be a false impression, and I'd love to get more input on this. When we first considered it, I had the intention to run the tests defined in our test plan through its pipeline, but now I'm seeing that we should be running those as frequently as possible. We would like to run integration testing every night, but our release candidates are only defined at the end of sprints, so using Release Management for that seems conflicting.
With the tool out of the equation, we are considering exploring the Lab Template again. We did some very minor tests with it a few months ago in a legacy project but never went too far. My main concern now is that both stages need deployment:
the Release Management pipeline needs to deploy our projects to the QA and production environment
the Lab Template also needs to deploy the project on a few virtual machines to run integration tests on
Release Management uses some very nice abstractions to achieve that. You can code machine scopes and define components based on the drop folder structure to describe each part of the whole application to be deployed. On the other hand, the Lab Management workflow does not support this (or perhaps I'm just missing it). The standard way to make deployment work for lab testing is to write a custom PowerShell script that moves the files from the build drop folder to the correct places, creates the application pools for web projects, and so on, all by hand.
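For illustration, a hand-rolled lab deployment step of that kind might look like the sketch below; the drop path, site name, port, and target directory are placeholders, and it assumes the WebAdministration module is available on the lab VM.

```powershell
# A rough sketch of a hand-rolled lab deployment step
# (placeholder drop path, site name, port and target directory).
Import-Module WebAdministration

$drop    = '\\tfsbuild\drops\MyApp\latest\_PublishedWebsites\MyApp.Web'
$target  = 'C:\inetpub\MyApp.Web'
$appPool = 'MyAppPool'

# Copy the built site from the drop folder onto the lab VM.
Copy-Item -Path $drop -Destination $target -Recurse -Force

# Create the application pool and site if they do not exist yet.
if (-not (Test-Path "IIS:\AppPools\$appPool")) {
    New-WebAppPool -Name $appPool | Out-Null
}
if (-not (Test-Path 'IIS:\Sites\MyApp')) {
    New-Website -Name 'MyApp' -Port 8080 -PhysicalPath $target -ApplicationPool $appPool | Out-Null
}
```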
Ideally, I'd like to just share the entire deployment workflow between both tools and, since the Release Management way of doing it seems much simpler, I'd use that. This would make it easier to maintain both pipelines at the same time, which I assume is actually commonplace.
What is the correct approach to share the deployment code as much as possible between the two tools?
I would expect that better integration between RM and MTM/LM will be a future feature. In the interim, you could investigate using Desired State Configuration to handle having a single script that configures environments for you.
DSC support isn't really out-of-the-box in RM Update 2, but RM Update 3 will have built-in support for DSC to both Azure and on-prem VMs. Update 3 CTP 1 is out right now, but it's not production-ready.
You can still use DSC from RM in Update 2, it just requires a bit more work.
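To give a flavour of the DSC approach, here is a minimal configuration sketch using only the built-in WindowsFeature and File resources; the node name, drop path, and target directory are placeholders.

```powershell
# A minimal DSC sketch (placeholder node name and paths).
Configuration LabWebEnvironment {
    param([string[]]$ComputerName = 'localhost')

    Node $ComputerName {
        # Make sure IIS is installed on the target node.
        WindowsFeature IIS {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        # Pull the built site from the drop share onto the node.
        File AppContent {
            Ensure          = 'Present'
            Type            = 'Directory'
            Recurse         = $true
            SourcePath      = '\\tfsbuild\drops\MyApp\latest\_PublishedWebsites\MyApp.Web'
            DestinationPath = 'C:\inetpub\MyApp.Web'
        }
    }
}

# Compiles a .mof per node; RM (or plain Start-DscConfiguration) can then apply it.
LabWebEnvironment -ComputerName 'lab-web01' -OutputPath 'C:\dsc\LabWebEnvironment'
```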

Solutions for automated deployment in developer environments?

I am setting up an automated deployment environment for a number of decoupled services that are in active development. While I am comfortable with the automated deployment/configuration management aspect, I am looking for strategies on how best to structure the deployment environment to make things a bit easier for developers. Some things to take into consideration:
Developers are generally building web applications, web services, and daemons -- all of which talk to one another over HTTP, sockets, etc.
The developers may not have everything running on their local machine, but they still need to be able to quickly do end-to-end testing by pointing their machine at the environment.
My biggest concern with continuous deployment is that we have a large team and I do not want to be constantly restarting services while developers are working locally against those remote services. On the flip side, delaying deployments to this development environment makes integration testing much more difficult.
Can you recommend a strategy that you have used in this situation in the past that has worked well?
Continuous integration doesn't have to mean continuous deployment. You can compile/unit test/etc. the code "continuously" throughout the day without deploying it and only deploy at night. This is often a good idea anyway - to deploy at night or on demand - since people may be integration testing during the day and wouldn't want the codebase to change out from under them.
Consider how much of the software developers can test locally. If a lot, they shouldn't need the environment constantly. If not a lot, it would be good to set up mocks/stubs so that much more can be tested on a local server. Then the deployed environment is only needed for true integration testing and doesn't need to be updated constantly throughout the day.
I'd suggest setting up a CI server (Hudson?) and using it to control all deployments to both your QA and production servers. This forces you to automate all aspects of deployment and ensures that there are no ad hoc restarts of the system by developers.
I'd further suggest that you consider publishing your build output to a repository manager like Nexus, Artifactory or Archiva. That way, deployment scripts can retrieve any version of a previous build. The use of a repository manager would also enable your QA team to certify a release prior to its deployment to production.
Finally, consider one of the emerging deployment automation tools. Tools like Chef, Puppet and ControlTier can be used to further version-control the configuration of your infrastructure.
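To illustrate the repository-manager point, a deployment script can fetch a specific, previously built version by its coordinates; the sketch below assumes a Maven-style repository layout, placeholder server and artifact names, and PowerShell 5+ for Expand-Archive.

```powershell
# A rough sketch: fetch a specific build version from a Maven-style repository
# manager (placeholder server, repository, and artifact coordinates).
$repo     = 'http://nexus.example.com/repository/releases'
$group    = 'com/example/myapp'
$artifact = 'myapp-web'
$version  = '1.4.2'

$url = "$repo/$group/$artifact/$version/$artifact-$version.zip"
$out = "C:\deploy\$artifact-$version.zip"

# Download the requested version; a deployment (or rollback) just changes $version.
Invoke-WebRequest -Uri $url -OutFile $out
Expand-Archive -Path $out -DestinationPath "C:\deploy\$artifact-$version" -Force
```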
I agree with Mark's suggestion of using Hudson for build automation. We have seen successful continuous deployment projects that use Nolio ASAP (http://www.noliosoft.com) to automatically deploy the application once the build is ready. As stated, Chef, Puppet and the like are good for middleware installation and configuration, but when you need to continuously release new application versions, a platform such as Nolio ASAP, which is application-centric, is better suited.
You should have your best IT operations folks create and approve the application release processes, and then provide an interface for the developers to run these processes on approved environments.