I am trying to configure IBM Cloud Devops Continuous Delivery.
I have the Build Stage configured. Build Stage runs fine and passes with a successful build.
However, the Deploy Stage, which is configured through a template, did not run at deployment time.
Every time the page is refreshed, I get a message that the Continuous Delivery service is not found, asking me to deploy a service. I can see that a Continuous Delivery service is already deployed and configured to run.
Sorry you're running into some trouble here. Provided you have an instance of Continuous Delivery in the same resource group as the toolchain & pipeline, the UI should not be throwing (and enforcing) a "Continuous Delivery service required" message at you. It sounds like you do have an instance of CD, so we're looking into the problem at our end of things.
I am facing a weird problem. I deployed my code using an Azure DevOps Pipeline Release. It ran successfully, but the code is not showing up in the Azure Function App.
Here is what I am doing:
Created a Release pipeline with a "Deploy Azure App Service" task. It picks up artifacts from a build pipeline and is configured to deploy to a Function App using a Service Connection.
When the Release pipeline is triggered, it runs through all its steps and reports success (see logs below).
However, when I open the Azure portal and navigate to the Function App, the Overview tab still says "Now it is time to add your code", and I am not able to hit my API on that Function App.
Surprisingly, the Deployment Center tab of the Function App does show the details of the deployment (see details below).
I can also find the deployed zip file under D:\home\site\wwwroot when I log on to the Kudu console
Deployment logs:
Got service connection details for Azure App Service:'myFuncApp'
Updating App Service Application settings. Data: {"WEBSITE_RUN_FROM_PACKAGE":"1"}
Updated App Service Application settings and Kudu Application settings.
Package deployment using ZIP Deploy initiated.
Successfully deployed web package to App Service.
App Service Application URL: http://myFuncApp.azurewebsites.net
View on Function App Deployment Center Tab:
Deployed Successfully to production
Source Version 6d9c8340ba Build 20190411.1 Release: 3
The Function App base endpoint is working (it serves the generic welcome page), which confirms the Function App itself is healthy, but I am not able to reach my API.
Additional updates
Here is the structure of the .zip file that is uploaded to d:\home\data\SitePackages as part of the zip deploy from Azure Pipelines:
/host.json
/package.json
/proxies.json
/package-lock.json
/func_name/index.js
/func_name/function.json
/node_modules/**
The same code is working locally.
Note: When I go to the Deployment Center tab, I do see the error message below, but I think it is related to continuous deployment configured through the Function App itself:
We were unable to connect to the Azure Pipeline that is connected to this Web App. This could mean it has been removed from the Azure Dev Ops Portal. If this has happened, you can disconnect this pipeline and set up a new deployment pipeline.
Please help me. What could be going wrong?
I was finally able to troubleshoot this. #4c74356b41 pointed me in the right direction, as the key issue was the package.
Here was the issue:
I had added an archive step in the build pipeline. This zipped the artifact before it was published.
In the release pipeline I was using the Azure App Service Deploy task. That task internally uses Zip Deploy when the app type is set to Azure Functions, so it was zipping my already-zipped file.
Once I removed the archive step, the double zipping was avoided and the deployment started working.
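For anyone hitting the same problem, here is a minimal sketch of the build-pipeline side in YAML. The paths and artifact name are illustrative; the point is simply that there is no archive (ArchiveFiles) step, because the App Service Deploy task zips the published folder itself:

    # azure-pipelines.yml (build side) -- illustrative sketch only
    steps:
    # Install production dependencies so node_modules/ ships with the artifact
    - script: npm install --production
      displayName: 'npm install'

    # Publish the raw function folder. Deliberately no ArchiveFiles@2 task here:
    # the "Azure App Service Deploy" release task zips the folder itself, so
    # archiving here would produce a zip inside a zip.
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: '$(Build.SourcesDirectory)'
        ArtifactName: 'drop'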
I have two microservice applications running in an Azure Service Fabric cluster. I don't have any issues when I deploy the applications from Visual Studio, but when I try to deploy them through an Azure DevOps CI/CD pipeline I get the error below.
[error]Found more than one item with search pattern D:\a\r1\a\**\drop\projectartifacts\**\PublishProfiles\Cloud.xml. There can be only one.
From this error message, what I understand is that I should have only one Cloud.xml file in the solution.
I would like to know the best practices for creating multiple applications in an Azure Service Fabric cluster, and how to resolve the error.
You have two SF applications in the solution. If you are building both and dropping them in the same folder, you will have two Cloud.xml files.
Because you specified a broad search pattern, the task finds both.
You didn't say which task is throwing this exception; I will assume it is Deploy Service Fabric Application.
To deploy both applications, you should have two deploy steps, one pointing at each application, and you should make each search pattern specific enough to match only the SF app that step is deploying, as in the sketch below.
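A minimal sketch of what those two steps could look like in YAML. AppOne, AppTwo, the package paths, and the service connection name are all placeholders for your own layout:

    # Two Service Fabric deploy steps, one per application (names illustrative)
    - task: ServiceFabricDeploy@1
      displayName: 'Deploy AppOne'
      inputs:
        publishProfilePath: '$(System.DefaultWorkingDirectory)/**/drop/AppOne/**/PublishProfiles/Cloud.xml'
        applicationPackagePath: '$(System.DefaultWorkingDirectory)/**/drop/AppOne/**/pkg'
        serviceConnectionName: 'my-sf-cluster'

    - task: ServiceFabricDeploy@1
      displayName: 'Deploy AppTwo'
      inputs:
        publishProfilePath: '$(System.DefaultWorkingDirectory)/**/drop/AppTwo/**/PublishProfiles/Cloud.xml'
        applicationPackagePath: '$(System.DefaultWorkingDirectory)/**/drop/AppTwo/**/pkg'
        serviceConnectionName: 'my-sf-cluster'

Each pattern now resolves to exactly one Cloud.xml, so the "more than one item" error goes away.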
I have a private GitLab instance with multiple projects and GitLab CI enabled. The infrastructure is provided by Google Cloud Platform, and the GitLab pipeline runner is configured in a Kubernetes cluster.
This setup works very well for basic pipelines running tests and so on. Now I'd like to start with CD, and to do that I need a manual acceptance step on the pipeline, which means the person reviewing it needs access to the current state of the app.
What I'm thinking of is a Kubernetes deployment per pipeline that would be spun up only when someone tries to access it (so we don't waste cluster resources) and destroyed once the reviewer accepts the pipeline, or after some time threshold.
The deployment would run in the same cluster as the GitLab runner (or a different one?) and would be accessible at a unique URI (we're mostly talking about web-server apps), e.g. https://pipeline-58949526.git.mydomain.com.
While in theory, it all makes sense to me, I don't really know how to set this up properly.
Does anyone have a similar setup? Is my view on this topic too simple? Let me know!
Thanks
If you want to see how to automate CI/CD across multiple environments on GKE, using GitOps for promotion between environments and preview environments on pull requests, you might want to check out my recent talk on Jenkins X at Devoxx UK, where I do a live demo of this on GKE.
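As a more GitLab-native take on the setup described in the question, GitLab's built-in review-apps mechanism covers most of it: a per-branch environment with a unique URL, a manual stop job for the reviewer, and an automatic teardown threshold. A minimal .gitlab-ci.yml sketch, where deploy.sh and teardown.sh stand in for your own kubectl/helm commands, the domain is a placeholder, and auto_stop_in assumes a reasonably recent GitLab:

    stages:
      - review

    deploy_review:
      stage: review
      script:
        - ./deploy.sh          # placeholder: e.g. helm upgrade --install review-$CI_COMMIT_REF_SLUG ...
      environment:
        name: review/$CI_COMMIT_REF_SLUG
        url: https://$CI_COMMIT_REF_SLUG.git.mydomain.com
        on_stop: stop_review
        auto_stop_in: 1 day    # teardown threshold if nobody stops it manually

    stop_review:
      stage: review
      script:
        - ./teardown.sh        # placeholder: e.g. helm uninstall review-$CI_COMMIT_REF_SLUG
      environment:
        name: review/$CI_COMMIT_REF_SLUG
        action: stop
      when: manual             # the reviewer stops the app once they have accepted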
I have a local VM that has successfully been running builds from VSTS.
After the builds are completed I have a deployment job that pushes my artifacts to a number of servers. This has been working fine for the past few months.
This morning I received a message that I was out of "deployment minutes", and my deployments failed even though my builds completed.
So evidently I was building locally but deploying in the cloud, on a hosted agent.
Is there a way I can configure my agent or my deployment job within VSTS to use my local agent?
You should just be able to switch your release definition to use the same agent queue as your builds. In each environment, click on each phase and change the agent queue it uses from "Hosted" to whatever you named your local agent queue.
You may need to install additional software (Azure PowerShell, if you're deploying to Azure, for example), but it will work the same -- the build and release agents are exactly the same software and use exactly the same tasks.
You should be able to add your agent to the pool (it requires some software installed there of course) and then select it instead of a hosted environment in the deployment step configuration:
Agent pools
At the bottom you can find:
The hosted pool provides all VSTS accounts with a single hosted build agent and a limited number of free build minutes each month. If you need more hosted build resources, or need to run more than one build concurrently, then you can either:
- Deploy your own on-premises build agents
- Buy additional hosted pipelines
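Registering a local agent into a queue is done with the configuration script that ships with the agent; run it on the VM. A sketch, where the account URL, PAT, and the pool/agent names are placeholders:

    rem Run from the unpacked agent folder on the local VM (Windows)
    .\config.cmd --url https://youraccount.visualstudio.com ^
        --auth pat --token <your-PAT> ^
        --pool MyLocalQueue --agent MyLocalAgent

Once the agent appears in the queue, select that queue in the release definition as described above.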
Here is where I finally found the option to specify which agent queue the release should run on: in the release definition, open the environment's agent phase and change the agent queue there.
We're currently doing continuous deployment to our dev/QA servers, and manually triggered automated deployment to our production boxes, using TeamCity/PowerShell/MsDeploy. We now have a requirement to deploy to a server that sits on an external network and cannot be reached from ours. Instead, it will have to "call home" for updates, and presumably push back whether the deployment succeeded or not.
I'm thinking we could write a service that requests a particular URL on our build server which delivers the artifacts that would have been used for deployment, pulls them down, and then fires off the deployment script.
However, I'm not entirely sure how we'd deal with updating the updater itself, or with failures when they occur. Does anyone have any recommendations on how to approach this?
Sounds like you need a release repository. The build server pushes files into it and each deploy job pulls from it. This would neatly decouple the two processes.
A release repository could be as simple as a shared NAS, or something more sophisticated such as the Nexus repository manager.
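Since the existing tooling is PowerShell, the "call home" service on the target server could be sketched roughly as below. The URLs, paths, and deploy.ps1 are placeholders, and it assumes the build server exposes the latest artifact and a status endpoint over HTTPS:

    # call-home-updater.ps1 -- sketch only; endpoints and paths are hypothetical
    $artifactUrl = 'https://build.example.com/artifacts/latest.zip'
    $statusUrl   = 'https://build.example.com/deployments/status'
    $workDir     = 'C:\deploy\incoming'

    try {
        # Pull the latest artifact down from the release repository
        New-Item -ItemType Directory -Path $workDir -Force | Out-Null
        Invoke-WebRequest -Uri $artifactUrl -OutFile "$workDir\latest.zip"
        Expand-Archive -Path "$workDir\latest.zip" -DestinationPath "$workDir\unpacked" -Force

        # Fire off the existing deployment script against the unpacked artifacts
        & "$workDir\unpacked\deploy.ps1"

        # Report success back to the build server
        Invoke-RestMethod -Uri $statusUrl -Method Post -ContentType 'application/json' `
            -Body (@{ result = 'success' } | ConvertTo-Json)
    }
    catch {
        # Report the failure, including the error text
        Invoke-RestMethod -Uri $statusUrl -Method Post -ContentType 'application/json' `
            -Body (@{ result = 'failed'; error = "$_" } | ConvertTo-Json)
    }

Shipping the updater script inside the artifact itself is one way to handle "updating the updater": each successful deployment then refreshes the script for the next run.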