I'm using a self-hosted agent in Azure Pipelines, and I installed Terraform 0.13 on it. When I use the Terraform tasks in Azure DevOps, I set commandOptions to '-plugin-dir=/usr/local/bin/.terraform.d/plugins' to skip plugin downloading. Unfortunately, Terraform still downloads the plugins into the working directory, which ends up in the artifact and makes it much heavier than it should be. The next stage (the deployment stage) also uses only the plugins from the artifact, not the ones on our agent.
We do not have much space on our virtual machine, which is why we want to avoid unnecessary downloads.
In addition, we defined a .terraformrc file in the home directory pointing to the plugin directory, and we added the environment variable as described here:
https://www.terraform.io/docs/commands/cli-config.html#provider-installation
Thank you in advance!
You can try setting the -get-plugins=false option.
-get-plugins=false — Skips plugin installation. Terraform will use plugins installed in the user plugins directory, and any plugins already installed for the current working directory. If the installed plugins aren't sufficient for the configuration, init fails.
This is stated in this document.
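For example, on a self-hosted agent this can be combined with -plugin-dir in a plain script step. A minimal sketch, assuming the Terraform configuration lives in a 'terraform' folder of the repository (the plugin path is the one from the question):
steps:
# Initialize Terraform using only the plugins already present on the agent.
# -get-plugins=false applies to Terraform 0.13; the flag was removed in later versions.
- script: |
    terraform init \
      -plugin-dir=/usr/local/bin/.terraform.d/plugins \
      -get-plugins=false
  displayName: terraform init without plugin download
  workingDirectory: $(System.DefaultWorkingDirectory)/terraform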
Eventually I did it another way - I used the Cache task from Azure Pipelines. Here's a solution from ITNext:
https://itnext.io/infrastructure-as-code-iac-with-terraform-azure-devops-f8cd022a3341
This is how I set up the Cache task: describe the keys well and choose the right path.
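A minimal sketch of what such a Cache task could look like - the key and path below are illustrative assumptions, not the original configuration:
steps:
# Restore and save the .terraform directory between runs so providers are not
# downloaded every time. Pick a key that changes whenever the provider
# requirements change (using versions.tf here is an assumption).
- task: Cache@2
  displayName: Cache Terraform providers
  inputs:
    key: 'terraform | "$(Agent.OS)" | terraform/versions.tf'
    path: $(System.DefaultWorkingDirectory)/terraform/.terraform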
Related
I have an Azure Repo with a pipeline that calls a script when triggered. The script needs a few dependencies to perform the work. Is there a way to have the dependencies available by default, to avoid having to install them every time the script is triggered?
If you want to avoid installing dependencies each time the pipeline runs, you need to build your own self-hosted agent.
From https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&tabs=browser
Self-hosted agents give you more control to install dependent software
needed for your builds and deployments. Also, machine-level caches and
configuration persist from run to run, which can boost speed.
I am trying to create a release pipeline in Azure DevOps. We already have a functioning build pipeline that works well, it is able to package the build with VSBuild and publish it as an artifact. Then in the release pipeline I am using an IIS Deployment job (which includes IIS Manage and IIS Deploy tasks) and it gets that artifact to deploy.
The problem is that we already have a publish profile (.pubxml) that should take care of pretty much everything the IIS Deployment is doing (at least as far as I understand it). So to me it seems I have two options that don't require me to refactor the project configuration itself.
I can try to mimic the settings on the IIS Deployment job to match our .pubxml as closely as possible and manually apply any changes that aren't doable through the task settings. Obviously this is not ideal, as it would require us to update both whenever we make changes, and it introduces a large chance of the pipeline breaking down over time.
I can scrap the idea of using IIS Deployment and just use a VSBuild task with the arguments /p:DeployOnBuild=true /p:PublishProfile=Staging. This doesn't seem like best practice because it means my release pipeline isn't passing a build package to deploy; it is just creating a new one at each stage.
So is there a better option that would allow me to utilize the package I created with VSBuild and the .pubxml configuration together in a deploy? If that isn't possible then are either of my options the "correct" way to handle my situation or am I just missing another method of deployment I could use?
Thank you for any help or insight you can provide. Please let me know if there is any more information I can give that would be useful.
You can try using a publish settings file (*.publishsettings) for your IIS deployment.
A publish settings file (.publishsettings) is different than a publishing profile (.pubxml) created in Visual Studio. A publish settings file is created by IIS or Azure App Service, or it can be manually created, and then it can be imported into Visual Studio.
For more details, you can see:
Publish an application to IIS by importing publish settings in Visual Studio
Deploy your app to a folder, IIS, Azure, or another destination
So unfortunately there doesn't seem to be a way to achieve everything I wanted here. The publish profiles are required when we build the project, so without changing how we configure those, I need to build the project whenever I want to deploy. Ultimately I went with option #2. I essentially just copied most of the build tasks used in the testing pipeline, placed them in the release pipeline, and modified a few commands to actually deploy the build once finished. It all seems to work just fine but still doesn't feel like best practice. If I am missing something, please let me know and I will make updates as appropriate.
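For reference, a rough sketch of what such a build-and-deploy step could look like in YAML; the solution path, configuration and profile name are assumptions, not the actual setup:
steps:
# Build the solution and let MSBuild publish it using the existing .pubxml profile.
- task: VSBuild@1
  displayName: Build and deploy via publish profile
  inputs:
    solution: MyApp.sln        # assumed solution name
    configuration: Release
    msbuildArgs: /p:DeployOnBuild=true /p:PublishProfile=Staging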
We have a release definition which delivers a bunch of ASP.NET Core services along with an Angular app.
Most services are not updated very often, so the question is: how do we compare an artifact version with the one already deployed to an environment, and skip the deployment if the latest version has already been deployed before?
We have multiple environments in the pipeline.
I don't think it is possible, at least not natively. You could calculate file hashes and not deploy if they match. Another option would be using path triggers to filter when an app is built. For example, if your directory structure looks like this:
root
|--app1
|--app2
etc
you can define path filters in your YAML build like this:
trigger:
  paths:
    include:
    - app1/*
    - sharedlibs/*  # if you have them
This way the build will only trigger if there are changes to files in those directories.
You can add an additional release environment that checks the current artifact version through PowerShell (e.g. Build.SourceVersion; check the variables available in a release), and fail the task if that version was already successfully released.
For the Staging environment, choose the After environment trigger option and select the previous environment.
On the other hand, since you have mentioned that most services are not updated very often, you could use 4c74356b41's suggestion to filter builds, so that you only build and release the changes you want.
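As a rough illustration of the PowerShell check mentioned above - where the last deployed version is stored (here, a file on the agent) is an assumption; a variable group or a tag on the release would work just as well:
steps:
# Fail this step if the commit being released has already been deployed.
- powershell: |
    $current = "$(Build.SourceVersion)"
    $marker  = "$(Agent.ToolsDirectory)\last-deployed.txt"   # assumed storage location
    $last    = if (Test-Path $marker) { Get-Content $marker } else { "" }
    if ($current -eq $last) {
      Write-Host "##vso[task.logissue type=error]Version $current has already been released."
      exit 1
    }
    Set-Content -Path $marker -Value $current
  displayName: Fail if this version was already released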
We are building packages for multiple deployment environments using a TeamCity server and OctoPack. The problem is that the Tentacle agent chooses the package with the latest version number, so the same (latest) package is deployed to all environments. Here's a summary of our setup:
Environments DEV and STAGE;
Deployment to DEV is triggered from Git "dev" branch;
Deployment to STAGE is triggered from Git "stage" branch;
OctoPack is configured to generate packages MyProduct.1.0.0.dev-%build_counter% for the DEV build configuration;
OctoPack is configured to generate packages MyProduct.1.0.0.%build_counter% for the STAGE build configuration;
TeamCity is configured to expose OctoPack artefacts (NuGet packages) via its NuGet feed;
Octopus project is configured to deploy packages with NuGet Id MyProduct from TeamCity NuGet feed.
So what happens is that since DEV builds run more frequently, they have a larger %build_counter%, and STAGE never gets a deployment of its own packages - the Octopus Tentacle prefers packages with the 1.0.0.dev-* suffix.
This must be a fairly common scenario, but I haven't found a simple way to solve it.
There are some parts that are not documented here: https://github.com/OctopusDeploy/Octopus-Tools. But if you look at https://github.com/OctopusDeploy/Octopus-Tools/blob/master/source/OctopusTools/Commands/CreateReleaseCommand.cs it is possible to figure out what you can do.
I think the tool is backward compatible, but I'm not 100% sure about that.
When you are using the octo tools, which I expect you do, you can set the version option (now also called releasenumber) to specify the release number. If you don't specify anything else, it will take the latest package, so what you want to do is set the packageversion (now also called defaultpackageversion) that should be used for the release.
I think that should do it. If it doesn't, what are you using to create the release?
Example of what we are using from our TeamCity setup with the octo tools, which we have added to the environment path on the build agents:
create-release --server=%conf.OctoServerApi% --project=%conf.OctoProject% --version=%env.OctopusPackageVersion% --deployto=%conf.OctoDeployEnv% --packageversion=%env.OctoPackPackageVersion% --apiKey=%conf.OctoApiKey% --waitfordeployment %conf.OctoExtraParams%
UPDATE:
The documentation for 2.0 is much better: http://docs.octopusdeploy.com/pages/viewpage.action?pageId=360596
Inspired by Tomas Jansson's answer, simply adding the following to Additional command line arguments in the OctopusDeploy: Create release build step (TeamCity v9) worked for me:
--packageversion=%build.number%
We decided to use Amazon AWS cloud services to host our main application and other tools.
Basically, we have an architecture like this:
TESTSERVER: The EC2 instance our main application is deployed to. Testers have access to the application.
SVNSERVER: The EC2 instance hosting our Subversion repository.
CISERVER: The EC2 instance where JetBrains TeamCity is installed and configured.
Right now, I need CISERVER to check out code from SVNSERVER, build it, run unit tests if the build succeeds, and, after all tests pass, deploy the artifacts of the successful build to TESTSERVER.
I have completed configuring CISERVER to pull the code, build, test and produce artifacts. But I couldn't figure out how to deploy the artifacts to TESTSERVER.
Do you have any suggestion or procedure to accomplish this?
Thanks for help.
P.S: I have read this Question and am not satisfied.
Update: There is a deployer plugin for TeamCity which allows publishing artifacts in a number of ways.
Old answer:
Here is a workaround for the issue that TeamCity doesn't have built-in artifacts publishing via FTP:
http://youtrack.jetbrains.net/issue/TW-1558#comment=27-1967
You can
create a configuration which produces build artifacts
create a configuration which publishes artifacts via FTP
set an artifact dependency in TeamCity from configuration 2 to configuration 1
Use either manual or automatic triggering to run configuration 2 with the artifacts produced by configuration 1. This way, your artifacts will be downloaded from configuration 1 to configuration 2 and published to your FTP host.
Another way is to create an additional build step in TeamCity for configuration 1, which publishes your files via FTP.
Hope this helps,
KIR
What we do for deployment is that the QA people log on to the system and run a script that deploys by pulling from the TeamCity repository whenever they want. They can see in TeamCity (and get an e-mail) when a new build has happened, but regardless, they just deploy when they want. In terms of how to construct such a script, the TeamCity part involves retrieving the artifact. That is why my answer references getting the artifacts by URL - that is something any reasonable script can do using wget (which has a Windows port as well) or similar tools.
If you want an automated deployment, you can schedule a cron job (or Windows scheduler) to run the script at regular intervals. If nothing changed, it doesn't matter much. I question the wisdom of this given that it may mess up someone testing by restarting the system involved.
The solution of having TeamCity push the changes as they happen is not something that TeamCity does out of the box (as far as I know), but you could roll your own, for example by having something triggered via one of TeamCity's notification methods, such as e-mail. I just question the utility of that. Do you want your system changing at random intervals just because someone happened to check something in? I would think it preferable to actually request the new version.