We are working on an MTA project and using the SAP Cloud SDK pipeline (Piper) to integrate CI/CD for Cloud Foundry.
The project has multiple microservices based on SAP Cloud SDK for Java, and each of these microservices has its own set of integration tests (based on the Java project template for SAP Cloud SDK).
We want to do MTA packaging of the microservices, but a problem arises when we run the MTA build with the SAP Cloud SDK pipeline.
The pipeline errors out because the new structure expects an integration-tests module at the project root level, whereas in our case the integration tests live at the individual microservice level.
We understand that the integration tests for all the microservices need to be combined at the project root level for the MTA build, but is there any other recommended approach for handling the integration tests at the individual microservice level?
We would appreciate any guidance here on understanding the new structure and the best practice to follow.
To add some context: we run a Maven build using the s4sdk pipeline for each of the aforementioned microservices. We want to combine these microservices into an MTA package for release deployment.
Let me say a few additional words:
The structure of MTA projects suitable for using SAP Cloud SDK Pipeline is documented here.
It is easier to start such a project based on the SAP Cloud Platform Business Application SAP Web IDE Template than to start with one of the SAP Cloud SDK archetypes.
I think there is a slight contradiction between the idea of microservices and a single MTA project, because microservices should be deployable in isolation. That said, using MTA to structure your application is perfectly fine, but in this case having the integration tests in the project root, as documented and as you already discovered, is the only way. You can use packages to separate the logical entities of your integration tests.
The alternative, if your code should be more strictly separated, would be to have multiple repos (which may or may not use MTA) and to set up a pipeline for each of those repos.
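As a sketch, the documented layout with root-level integration tests roughly looks like this (module names here are placeholders, not from the actual project):

```
/
├── mta.yaml
├── pom.xml               # aggregator POM listing all modules
├── service-a/            # microservice module with its own pom.xml
├── service-b/
└── integration-tests/    # single root-level module; use Java packages
                          # to separate tests per microservice
```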
I hope this helps.
I need a DevOps deployment tool that can do the following tasks:
Create a project that can contain different queues or artifacts within a GUI application
Create queues and artifacts via a GUI accessible to non-WMQ experts within the DEV environment only (without relying on a script)
Once tested properly, have the tool deploy the project (more specifically, the queues contained in the project) from DEV to INT via a GUI that is only accessible by the deployment team
Does anyone have suggestions for tools that can perform this, other than using scripts?
(I have heard mixed reviews about IBM uDeploy, e.g. that configurations cannot be reused.)
There are lots of 3rd party MQ Administration/Configuration tools available for IBM MQ. You can find a list of them here.
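For comparison, the script-based approach these GUI tools replace is typically an MQSC script fed to runmqsc; the queue manager and queue names below are hypothetical:

```
* queues.mqsc -- run with: runmqsc DEV.QM1 < queues.mqsc
* define a local queue with a raised depth limit
DEFINE QLOCAL('APP.ORDERS.IN') MAXDEPTH(5000) REPLACE
* define an alias so applications are decoupled from the real queue
DEFINE QALIAS('APP.ORDERS') TARGQ('APP.ORDERS.IN') REPLACE
```

Promoting DEV to INT with scripts then amounts to replaying the same MQSC file against the INT queue manager, which is exactly the step you want the GUI tool to own.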
What is the best way to achieve DevOps with XPages?
Multiple developers working as a team
On-premises servers [Dev, QA, Prod]; can we replicate to Bluemix?
Source control
Automated testing of the UI / application
Unit testing business logic with a testing framework
Automated deployment
IDE/Tools: Domino Designer; are there other ways?
Note: use of views when the data is in an NSF; otherwise the data is in the cloud or SQL. No forms or other classic Notes design elements.
What are your approaches to this?
This is a high-level overview of the topics required to attempt what you're describing. I'm breezing past lots of details, so please search them out; I've tried to reference the supporting documentation, blog posts, etc. of others that I'm currently aware of. If anyone has anything good to add, I'm happy to add it in.
There are several components involved with what you're describing, generally amounting to:
SCM workflow
building the app (NSF)
deploying the built app to a Domino server
Everything else, such as release workflow through a QA/QC environment, is secondary to the primary steps above. I'll outline what I'm currently doing, attempting to highlight where I'm working on improving the process.
1. SCM Workflow
This can be incredibly opinionated and will depend a lot on how your team does/wants to use source control with your deployment / release process. Below I'll touch on performing tests, conceptually, during/around the build step.
I've switched from a fairly generic SCM server implementation to a GitLab instance. Even running a CE instance is pretty fantastic, given their CI runner capabilities. Previously, I had a Jenkins CI instance performing about the same tasks, but I had to bake more "workflow" into the Jenkins job, whereas now most of that logic is in a unified script, referenced from a config file (.gitlab-ci.yml). This is similar to how a Travis CI or other similar CI config file works.
This config calls some additional helper work, but ultimately revolves around an adapted version of Egor Margineanu's PowerShell script which invokes the headless DDE build task.
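As a rough sketch of such a config (the runner tag, script path, and output directory are assumptions, not the actual setup), a minimal .gitlab-ci.yml might look like:

```yaml
# runs on a Windows runner registered with a 'dde' tag,
# where Domino Designer is installed and auto-build is enabled
stages:
  - build

build_nsf:
  stage: build
  tags:
    - dde
  script:
    # hypothetical wrapper around the headless Designer build
    - powershell -File .\build\HeadlessBuild.ps1
  artifacts:
    paths:
      - out/*.nsf
```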
2. Building an NSF from Source
I've blogged about my general build process with my previous Jenkins CI implementation. I followed the blogging of Cameron Gregor and Martin Pradny for this. Ultimately, you need to:
configure a Windows environment with Domino Designer
set up Domino Designer to import from ODP (disable export), ensuring Build Automatically is enabled
the notes.ini will need a flag of DESIGNER_AUTO_ENABLED=true
the Jenkins CI or GitLab CI runner (or other) will need to run as the logged-in user, not as a Windows service; this allows it to invoke the "headless DDE" command correctly, since it runs in the background as opposed to a true headless invocation
ensure that Domino Designer can start without prompting for a user's password
My blog post covers additional topics, such as flagging the build as a success or failure by scanning the output logs for failure markers. It also touches on how to submit the code to a SonarQube instance.
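The log-scan step can be sketched as a small shell helper (the marker string "Build failed" is an assumption; match whatever your Designer version actually writes on a failed build):

```shell
# Sketch: fail the CI step when the Designer build log contains a
# failure marker.
check_build_log() {
  # succeed only if the marker is absent from the given log file
  ! grep -q "Build failed" "$1"
}

# usage in a CI step:
# check_build_log designer-build.log || exit 1
```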
Ref: IBM Notes/Domino App Dev Wiki page on headless designer
Testing
Any additional testing or other workflow considerations (e.g.- QA/QC approval) should go around the build phase, depending on how you set up your SCM workflow. A lot of the implementation will revolve around the specifics of your setup. A general idea is to allow/prevent deployment based on the outcome of the build + test phase.
Bluemix Concerns
IBM Bluemix, the only PaaS that runs IBM XPages applications, will require some additional consideration, such as:
their Git deploy process will only accept a pre-built NSF
the NSF must be signed with the account owner's Bluemix ID
Ref:
- IBM XPages on Bluemix
- Bluemix Docs: Building XPages apps for the Bluemix Runtime
3. Deploy
To Bluemix
If you're looking to deploy an XPages app to run on Bluemix, you'll want to either ensure your headless build runs with the Bluemix ID, or that the NSF is at least signed with it, and then deploy it for a production push either via a Git connection or the cf/bluemix command-line utility. Bluemix's receive hooks handle the rest of the deployment concerns, such as starting/stopping the server instance.
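With the cf CLI, a minimal push sequence might look like this (the app and file names are placeholders, and the exact options depend on what the XPages runtime's manifest expects):

```
$ cf api https://api.ng.bluemix.net
$ cf login
$ cf push my-xpages-app -p my-xpages-app.nsf
```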
To On-Premise Server
A user ID with appropriate credentials needs to either perform a design replace/refresh or stop a dev/test/staging server, perform the file copy of the .nsf, then start it back up. I've heard rumors of Cameron Gregor making use of a plugin to Domino Designer to perform the operations needed for OSGi plugin development, which sounds pretty useful. As most of my Domino application development is almost purely NSF-based, I'm focusing on an approach of deploying to a staging/dev/test server, on which I can then run a design task to do the needed refresh/replace; this is closer to the "normal" Domino way of doing things.
Summary
Again, there are a lot of moving pieces involved here, some of which get rather opinionated rather quickly. For example, I'm currently virtualizing my build machine so I can spool up a couple of virtual machines from it, allowing for more than one build at a time. If there are major gaps in the process, let me know and I'll fill in what I can.
I want to create/update websites/cloud services in Azure in C#. My objective is to deploy the website/cloud service to Azure without any user intervention.
Can anyone please help me resolve the queries below?
Can we manage Azure websites/cloud services using C# code? If yes, then how (any library/API/NuGet package)?
If it is not possible in C#, what are the other options to achieve this? I read that Web Deploy (MSDeploy) and PowerShell can do this work, but I am not sure which one is best in this scenario or how to use them.
This completely depends on your scenario. If you have a system to run your PowerShell script from, this might be a good option (https://azure.microsoft.com/en-us/documentation/articles/powershell-install-configure/). You could also use the cross-platform command-line tools to script your deployment / web app creation. There are various other options, especially for continuous deployment to a web app. You can, for example, connect your GitHub repo to an existing web app and deploy from that repository.
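For example, creating a web app with the cross-platform CLI looks roughly like this (the site name and location are placeholders):

```
$ azure login
$ azure site create --location "West Europe" my-site-name
```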
The C# library you were looking for should be this one:
https://github.com/Azure/azure-sdk-for-net/tree/master/src/ResourceManagement/WebSite
On a project I'm working on, we have a pretty advanced command-line interface to build, test, package, and deploy software.
Now we want to use Jenkins as a front-end to this CLI, and we want to be able to generate job configurations. We want the interface to be simple: the user supplies only a couple of parameters, and Jenkins then queries our CLI and generates the needed build steps.
Simple use case:
Create new domain-specific-job
Select Product
Jenkins now queries the CLI and updates the next drop-down with the product's different branches.
Select branch
Jenkins generates the build steps by querying the CLI
As I'm new to plugin development in Jenkins (and Jenkins overall), I would love to get some tips and pointers on where to start.
There are several plugins to generate jobs in Jenkins, and there is also the Jenkins CLI.
I think the better approach would be to use the Jenkins CLI to generate jobs from the outside, by developing a GUI tool that integrates both CLIs nicely.
Here is an example: http://tikalk.com/alm/using-jenkins-cli-job-gen
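For instance, your tool could render a job's config.xml from the parameters the user picked and submit it via the Jenkins CLI (the server URL and job name below are placeholders):

```
$ java -jar jenkins-cli.jar -s http://jenkins.example.com:8080/ create-job my-generated-job < config.xml
```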
If you really require to do it in the Jenkins front-end then here are some pointers on possible plugins:
Job DSL Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin
Jobcopy Builder Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Jobcopy+Builder+plugin
Job Generator Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Job+Generator+Plugin
Take a look at jenkins-job-builder; it could help you abstract your work and create the Jenkins jobs from the command line.
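A jenkins-job-builder definition is plain YAML; a minimal sketch might look like this (the CLI command and product names are hypothetical):

```yaml
# a parameterized template: '{product}' is expanded per project entry
- job-template:
    name: '{product}-build'
    builders:
      - shell: './our-cli build --product {product}'

# expands the template into one job per product
- project:
    name: our-products
    product:
      - alpha
      - beta
    jobs:
      - '{product}-build'
```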