Currently I am working on a project to provide RESTful APIs on Azure. We want to deploy the project to both Azure Kubernetes Service (AKS) and Service Fabric. Is it possible to do that? And how do we implement CI/CD on Azure?
We need to maintain all code logic in a single project, then create a deployment package for both AKS and Service Fabric using different configuration files/scripts. Alternatively, we could have two extra projects in the same solution, one for AKS and one for Service Fabric.
Either option is acceptable. Is there any sample or guide?
Both AKS and Service Fabric support microservice deployment orchestration. If you elaborate on where you are facing a challenge, I or someone else will be able to help you in a better way.
Azure has its own DevOps resources, so you can set up CI/CD easily:
https://azure.microsoft.com/en-in/product-categories/devops/
If you want to create your own CI/CD structure, there are tools like Jenkins, Travis CI, Terraform, Ansible, Chef, etc. that you can start looking into.
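For the Azure DevOps route, a minimal multi-stage pipeline sketch could look like the following. This is untested and every name here (service connections, paths, resource names) is a placeholder; the idea is one shared build stage feeding separate AKS and Service Fabric deploy stages:

```yaml
# Hypothetical azure-pipelines.yml: one shared build, two deploy targets.
trigger:
  - main

stages:
  - stage: Build
    jobs:
      - job: BuildApi
        pool:
          vmImage: ubuntu-latest
        steps:
          - script: dotnet publish src/MyApi -c Release -o $(Build.ArtifactStagingDirectory)
            displayName: Build the shared API code
          - publish: $(Build.ArtifactStagingDirectory)
            artifact: drop

  - stage: DeployAks
    dependsOn: Build
    jobs:
      - job: DeployToAks
        pool:
          vmImage: ubuntu-latest
        steps:
          - task: AzureCLI@2
            displayName: Apply AKS manifests
            inputs:
              azureSubscription: my-azure-connection   # placeholder service connection
              scriptType: bash
              scriptLocation: inlineScript
              inlineScript: |
                az aks get-credentials --resource-group my-rg --name my-aks
                kubectl apply -f deploy/aks/

  - stage: DeploySf
    dependsOn: Build
    jobs:
      - job: DeployToServiceFabric
        pool:
          vmImage: windows-latest
        steps:
          - download: current
            artifact: drop
          - task: ServiceFabricDeploy@1
            displayName: Deploy the packaged SF application
            inputs:
              applicationPackagePath: $(Pipeline.Workspace)/drop/MySfAppPkg  # placeholder
              serviceConnectionName: my-sf-connection                        # placeholder
              publishProfilePath: deploy/sf/Cloud.xml
```

This matches your first option: one code project, with per-target configuration (Kubernetes manifests vs. a Service Fabric application package and publish profile) living in separate deploy folders.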
I am currently deploying an application (Ansible Automation Platform) on OpenShift clusters using a Helm chart and operators. I would like worker nodes in OpenShift to run as an instance group in Ansible Automation Platform. This setup is done, including deployment via a GitLab CI/CD pipeline.
However, I would like unit, integration, and performance tests for my deployment, e.g.:
Whether the correct release and revision of the Helm chart is deployed
All resources on OpenShift are up
Connectivity to the controller
Connectivity to GitLab (SCM)
Connectivity between execution nodes (might be with an API call)
Running a test job template
(preferably with the test steps also included as pipeline stages)
Could you suggest testing options or tools to perform this testing, ideally with pros and cons?
Thank you
I first thought about using Helm hooks for checking connectivity between Kubernetes resources.
Helm hooks seem to provide post-install options in the deployment lifecycle.
I wonder whether there are other options, or whether this option has drawbacks.
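For reference, this is roughly the shape of the hook I have in mind: a minimal, untested sketch where the image, endpoints, and names are all placeholders:

```yaml
# Hypothetical templates/connectivity-check.yaml inside the chart.
# Runs as a Helm hook after install/upgrade and fails the release
# if the connectivity checks fail.
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-connectivity-check"
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: check
          image: curlimages/curl:8.8.0
          command:
            - sh
            - -c
            - |
              # Placeholder endpoints: controller ping and GitLab health check.
              curl --fail --max-time 10 https://controller.example.com/api/v2/ping/
              curl --fail --max-time 10 https://gitlab.example.com/-/health
```

Switching the annotation to "helm.sh/hook": test would turn the same Job into a `helm test` hook, which is easy to run as its own pipeline stage. One con I'm aware of: hook resources are not tracked as part of the release, so failed hooks can leave resources behind unless a delete policy is set.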
I would like to ask what people use to provision an ephemeral preview environment in AWS EKS for a service under test. In addition, I am curious how you provision any dependent services (such as a database).
E.g. I am working on a back-end service and would like to deploy an isolated ephemeral version of this service, packaged from my feature branch and including the database. Furthermore, I would also like a copy of a front-end service in my isolated environment to test my back-end.
Any thoughts would be appreciated
Thanks
Sachin
You can roll your own solution by wiring a CI/CD tool (Jenkins, CircleCI, Buildkite, GitHub Actions, etc.) into webhooks on your source repository to trigger building and deploying a preview environment. This would have to include building the modified code, deploying that code to a staging environment, and of course seeding that environment with some type of data.
There is a bit of nuance to getting this right. You should check out https://ephemeralenvironments.io/ which is a good template of what needs to go into these environments.
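As a rough illustration of the roll-your-own approach with GitHub Actions, the untested sketch below builds a PR image and installs an isolated Helm release, with the database enabled as a chart dependency, into a per-PR namespace. The IAM role, ECR repo, cluster name, and chart values are all placeholders:

```yaml
# Hypothetical .github/workflows/preview.yml
name: preview-environment
on:
  pull_request:

permissions:
  id-token: write   # for OIDC-based AWS login
  contents: read

jobs:
  deploy-preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-preview-deployer  # placeholder
          aws-region: us-east-1
      - uses: aws-actions/amazon-ecr-login@v2
        id: ecr
      - name: Build and push the feature-branch image
        run: |
          docker build -t ${{ steps.ecr.outputs.registry }}/backend:pr-${{ github.event.number }} .
          docker push ${{ steps.ecr.outputs.registry }}/backend:pr-${{ github.event.number }}
      - name: Deploy an isolated release (app + database) into its own namespace
        run: |
          aws eks update-kubeconfig --name preview-cluster --region us-east-1
          helm upgrade --install pr-${{ github.event.number }} ./chart \
            --namespace pr-${{ github.event.number }} --create-namespace \
            --set image.tag=pr-${{ github.event.number }} \
            --set postgresql.enabled=true   # ephemeral DB as a chart dependency
```

A matching workflow triggered on the pull request's closed event would run `helm uninstall` and delete the namespace to tear the environment down.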
A lot of other folks use services that provide this as a SaaS platform; Shipyard.build, Release, and Velocity.tech are a few of your options.
Disclaimer: I'm on the Operations team at Shipyard
Hope this helps!
I have an application that I want to deploy to a number of VMs on Azure and AWS. I was working with Azure DevOps before, and it provided very nice features for this (deployment groups, etc.). Now I want to work with GitHub, and I am having real trouble designing my CI/CD pipeline, since GitHub Actions has no built-in feature for deploying to a set of VMs. If there is one, please share your thoughts; any article would be appreciated. Thanks.
You can first consider deploying the application to a single virtual machine with GitHub Actions.
In Azure, all you need is a GitHub Actions workflow that targets a virtual machine (VM) within Azure.
You can learn the detailed steps for deploying an application to one VM with GitHub Actions in: How to use GitHub Actions to deploy an Azure Virtual Machine.
For multi-environment deployments on either Azure or AWS with GitHub Actions, I recommend using Octopus Deploy as a reference; you can refer to Multi-environment deployments with GitHub Actions and Octopus to deploy virtual machines on AWS.
To deploy the application to multiple VMs, we recommend Azure Batch to run parallel workloads. It lets you deploy the application to multiple VMs at once, building on the single-VM deployment above.
You can run the Batch job using the Azure CLI by following the example: Run Batch job with the Azure CLI.
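As an alternative to Batch, GitHub Actions can fan a deploy script out over several Azure VMs with a plain job matrix. A rough, untested sketch, where the resource group, VM names, and script path are placeholders:

```yaml
# Hypothetical .github/workflows/deploy-vms.yml
name: deploy-to-vms
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        vm: [web-vm-1, web-vm-2, web-vm-3]   # placeholder VM names
    steps:
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Run the deploy script on ${{ matrix.vm }}
        run: |
          az vm run-command invoke \
            --resource-group my-rg \
            --name ${{ matrix.vm }} \
            --command-id RunShellScript \
            --scripts "cd /opt/myapp && ./deploy.sh"
```

Each matrix entry becomes its own job, so the VMs are deployed in parallel, which is the closest GitHub Actions equivalent to a deployment group that I know of.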
I have two microservice applications running in an Azure Service Fabric cluster. I don't have any issues when I deploy the applications from Visual Studio, but when I try to deploy them through an Azure DevOps CI/CD pipeline, I get the error below.
[error]Found more than one item with search pattern D:\a\r1\a\**\drop\projectartifacts\**\PublishProfiles\Cloud.xml. There can be only one.
From this error message, what I understand is that I should have only one Cloud.xml file in the solution.
I would like to know the best practices for creating multiple applications in an Azure Service Fabric cluster, and how to resolve this error.
You have two SF applications in the solution. If you are building both and dropping them in the same folder, you will have two Cloud.xml files.
Because you specified a broad search pattern, the task finds both.
You didn't say which task is throwing this exception; I will assume it is Deploy Service Fabric Application.
To deploy both applications, you should have two steps, one pointing at each application, and you should make the search pattern specific about which SF app each step deploys, as in the sketch below.
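For example, in YAML form the two steps could look like this; the connection name and folder layout are placeholders, and the point is that each pattern matches exactly one Cloud.xml:

```yaml
# Hypothetical pipeline steps: one ServiceFabricDeploy task per application,
# each with a search pattern scoped to that application's artifact folder.
- task: ServiceFabricDeploy@1
  displayName: Deploy App1
  inputs:
    publishProfilePath: $(System.DefaultWorkingDirectory)/**/drop/App1/**/PublishProfiles/Cloud.xml
    applicationPackagePath: $(System.DefaultWorkingDirectory)/**/drop/App1/**/pkg
    serviceConnectionName: my-sf-connection   # placeholder

- task: ServiceFabricDeploy@1
  displayName: Deploy App2
  inputs:
    publishProfilePath: $(System.DefaultWorkingDirectory)/**/drop/App2/**/PublishProfiles/Cloud.xml
    applicationPackagePath: $(System.DefaultWorkingDirectory)/**/drop/App2/**/pkg
    serviceConnectionName: my-sf-connection   # placeholder
```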
I have a private GitLab instance with multiple projects and GitLab CI enabled. The infrastructure is provided by Google Cloud Platform, and the GitLab pipeline runner is configured in a Kubernetes cluster.
This setup works very well for basic pipelines running tests, etc. Now I'd like to start with CD, and to do that I need a manual acceptance step in the pipeline, which means the person reviewing it needs access to the current state of the app.
What I'm thinking of is a Kubernetes deployment for the pipeline that would be spun up once you try to access it (so we don't waste cluster resources) and destroyed once the reviewer accepts the pipeline, or after some threshold.
So the deployment would run in the same cluster as the GitLab runner (or a different one?) and would be accessible via a unique URI (we're mostly talking about web-server apps), e.g. https://pipeline-58949526.git.mydomain.com.
While in theory it all makes sense to me, I don't really know how to set this up properly.
Does anyone have a similar setup? Is my view on this topic too simple? Let me know!
Thanks
If you want to see how to automate CI/CD with multiple environments on GKE, using GitOps for promotion between environments and preview environments on pull requests, you might want to check out my recent talk on Jenkins X at Devoxx UK, where I do a live demo of this on GKE.
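For a GitLab-native take on what you describe, GitLab's built-in review environments cover the manual start, unique URL, and teardown on acceptance or after a timeout. A rough, untested .gitlab-ci.yml sketch, where the chart, namespace, and domain are placeholders:

```yaml
# Hypothetical .gitlab-ci.yml: a manual review deployment plus a stop job.
stages:
  - review

deploy_review:
  stage: review
  script:
    - helm upgrade --install review-$CI_COMMIT_REF_SLUG ./chart
        --namespace review-$CI_COMMIT_REF_SLUG --create-namespace
        --set ingress.host=pipeline-$CI_PIPELINE_ID.git.mydomain.com
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://pipeline-$CI_PIPELINE_ID.git.mydomain.com
    on_stop: stop_review
    auto_stop_in: 2 days      # destroy after a threshold
  when: manual                # only spun up when someone triggers it

stop_review:
  stage: review
  script:
    - helm uninstall review-$CI_COMMIT_REF_SLUG --namespace review-$CI_COMMIT_REF_SLUG
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  when: manual
```

Running this on the same cluster as the runner works; a separate cluster (or at least a dedicated namespace per review app, as above) just gives you cleaner isolation.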