How to configure Travis job's before_deploy/after_deploy steps to run only for one of the deploy providers? - deployment

I would like to define a before_deploy and an after_deploy step in my Travis build that runs only for one of the two providers used in my deploy step. The before/after steps currently run once for each of the providers, but the actions apply only to one of them.
If there's no way to configure the .travis.yml file to do this explicitly, is there some way I could pass information along from my deploy step to the after_deploy step so that I could check which provider it is being run for?
Note that the two deploy providers that I'm using are bintray and releases, so there seems to be very little flexibility in what I can do as part of the actual deploy step (i.e. I'm not deploying via a script which would give me more freedom to do extra stuff).
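For reference, the deploy section in question has roughly this shape (file names, user, and secrets are placeholders, not my actual values):

```yaml
# Sketch of a two-provider Travis deploy; after_deploy runs once per provider.
deploy:
  - provider: bintray
    file: descriptor.json        # placeholder descriptor
    user: my-bintray-user        # placeholder
    key: $BINTRAY_API_KEY
  - provider: releases
    api_key: $GITHUB_TOKEN
    file: build/myapp.zip        # placeholder artifact
after_deploy:
  - ./scripts/post_release.sh    # should run for only one of the providers
```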


Why choose github action when we can just run bash script in github workflow?

I just completed a GitHub workflow that mostly uses actions, but also includes one bash script.
While writing the workflow, it often seemed much quicker to use a bash script than an action (since some actions just do one thing). What are the reasons to prefer GitHub Actions over triggering a bash or Python script?
Or are we just supposed to use scripting languages for most of the workflow, and GitHub Actions for a small portion of it?
Interesting, but not easy to answer without more information about what your goal is. The right answer might depend on your use case.
I have not used GitHub actions yet. Let me try to explain it anyway, starting pretty high level. Unfortunately, there's no option to add a table of contents ;) Please let me know if this helps.
1. What are GitHub Actions for?
From this "What is GitHub Actions? Benefits and examples" PDF file
GitHub Actions is a CI/CD tool for the GitHub flow. You can use it to integrate and deploy code changes to a third-party cloud application platform as well as test, track, and manage code changes. GitHub Actions also supports third-party CI/CD tools, the container platform Docker, and other automation platforms.
From docs.github.com
GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that build and test every pull request to your repository or deploy merged pull requests to production. [...]
GitHub Actions goes beyond just DevOps and lets you run workflows when other events happen in your repository.
2. Continuous Integration/Continuous Deployment (CI/CD)
Usually, people run CI/CD tools to build, deploy, and test, and to run other tasks along the way. We use another third-party CI/CD pipeline based on Rake to build, test, and check links. Our pipeline invokes the small scripts you mention.
3. GitHub actions and scripts
From Essential features of GitHub Actions
If your job generates files that you want to share with another job in the same workflow, or if you want to save the files for later reference, you can store them in GitHub as artifacts. Artifacts are the files created when you build and test your code. For example, artifacts might include binary or package files, test results, screenshots, or log files. Artifacts are associated with the workflow run where they were created and can be used by another job. All actions and workflows called within a run have write access to that run's artifacts.
Here's the key point, I guess. You can really do a lot of crazy stuff within a workflow, and all of it is specific to GitHub. Workflows are event-driven, meaning that you can run a series of commands after a specified event has occurred. For example, every time someone creates a pull request, you can automatically run a command that executes a test or other script.
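To make that concrete, a minimal workflow that runs a test script on every pull request might look like this (the script path is a placeholder):

```yaml
# Minimal sketch: run a test script on every pull request.
name: pr-tests
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: ./scripts/run_tests.sh   # hypothetical script path
```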
4. GitHub action workflow and scripts
You can include different scripts in your workflow (see the sketch after this list), e.g. using
JavaScript: https://github.com/actions/github-script
Python: https://github.com/marketplace/actions/run-python-script
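As a sketch, mixing a plain shell step with the github-script action inside one job could look like this (the comment body is just an example):

```yaml
# Sketch: a bash step and a JavaScript (github-script) step side by side.
name: mixed-steps
on: pull_request
jobs:
  mixed:
    runs-on: ubuntu-latest
    steps:
      - name: Plain bash step
        run: echo "plain bash works fine"
      - name: Call the GitHub API from JavaScript
        uses: actions/github-script@v7
        with:
          script: |
            // Comment on the pull request that triggered the workflow
            github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: 'Hello from github-script!'
            })
```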
5. (Complex) Examples
You can check out the repository for docs.github.com for some more complex examples; see the action-scripts and workflow folders. GitHub themselves seem to use it pretty heavily.
6. Advantages/Disadvantages of GitHub actions
OR: Differences to other CI tools
It took some time to find something not marketing-ish. Key points are:
beginner-friendly using YAML config files
no need to set up your own CI pipeline
You can check out this SO post from 2019 for a list of what's good and bad about GitHub actions.
In short - for readability and the DRY ("Don't repeat yourself") principle.
It's more or less the same as using functions in programming.
I can agree that some trivial actions are useless.
But "actions/checkout" for example is priceless!

Azure DevOps Release Pipeline using Packaged Build and Publish Profile

I am trying to create a release pipeline in Azure DevOps. We already have a functioning build pipeline that works well, it is able to package the build with VSBuild and publish it as an artifact. Then in the release pipeline I am using an IIS Deployment job (which includes IIS Manage and IIS Deploy tasks) and it gets that artifact to deploy.
The problem is that we already have a publish profile (.pubxml) that should take care of pretty much everything the IIS Deployment is doing (at least as far as I understand it). So it seems I have two options that don't require me to refactor the project configuration itself.
I can try to mimic the settings of the IIS Deployment job to match our .pubxml as closely as possible, manually applying any changes that aren't doable through the task settings. Obviously this is not ideal, as it would require us to update both whenever we make changes, and it introduces a large chance of the pipeline breaking down over time.
I can scrap the idea of using IIS Deployment and just use a VSBuild task that uses arguments /p:DeployOnBuild=true /p:PublishProfile=Staging. This doesn't seem like best practices because it means my release pipeline isn't passing a build package to deploy, it is just creating a new one at each stage.
So is there a better option that would allow me to utilize the package I created with VSBuild and the .pubxml configuration together in a deploy? If that isn't possible then are either of my options the "correct" way to handle my situation or am I just missing another method of deployment I could use?
Thank you for any help or insight you can provide. Please let me know if there is any more information I can give that would be useful.
You can try using a publish settings file (*.publishsettings) for your IIS deployment.
A publish settings file (.publishsettings) is different from a publishing profile (.pubxml) created in Visual Studio. A publish settings file is created by IIS or Azure App Service, or it can be created manually, and then it can be imported into Visual Studio.
To view more details, you can see:
Publish an application to IIS by importing publish settings in Visual Studio
Deploy your app to a folder, IIS, Azure, or another destination
So unfortunately there doesn't seem to be a way I can achieve everything I wanted here. The publish profiles are required when we build the project, so without changing how we configure those, I need to build the project whenever I want to deploy. Ultimately I went with option #2. I essentially copied most of the build tasks used in the testing pipeline and placed them in the release pipeline, with a few modified commands to actually deploy the build once finished. It all seems to work just fine but still doesn't feel like best practice. If I am missing something, please let me know and I will make updates as appropriate.
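For anyone taking the same route, option #2 expressed as a YAML build step might look roughly like this (solution path and profile name are placeholders):

```yaml
# Sketch: build and deploy in one VSBuild step using the publish profile.
- task: VSBuild@1
  inputs:
    solution: '**/*.sln'    # placeholder
    configuration: 'Release'
    msbuildArgs: '/p:DeployOnBuild=true /p:PublishProfile=Staging'
```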

During a release, how to get a list of server names deployed to from a deployment group in a task to use in another job?

What is the way to get a list of server names that were deployed to so they can be used in another job with a different agent in the same deployment pipeline?
We have a number of servers in a deployment group that get deployed to. We would like to point an automated test server at each of these environments to confirm the deployment went correctly. Therefore we need a list of the servers that were deployed to.
Since the list of servers could grow or shrink we can't hard code all the servers to a variable.
As a workaround we created a PowerShell step that calls the REST API to get the deployment group machine details. However, we would like to achieve this using variables / outputs etc. in the Azure DevOps interface.
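For reference, that REST workaround as an inline pipeline step might look something like this (organization, project, and deployment group id are placeholders, and the response shape is an assumption about the deployment groups targets endpoint):

```yaml
# Sketch: collect deployment group target names into an output variable.
- powershell: |
    $url = "https://dev.azure.com/myorg/myproject/_apis/distributedtask/deploymentgroups/1/targets?api-version=6.0"
    $resp = Invoke-RestMethod -Uri $url -Headers @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }
    # Join the target machine names into one comma-separated value
    $names = ($resp.value | ForEach-Object { $_.agent.name }) -join ","
    Write-Host "##vso[task.setvariable variable=serverNames;isOutput=true]$names"
  name: getServers
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
```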
One thing to be aware of is that variables you might set by command do not persist between phases. If you want to know the deployment servers that were deployed during a phase, you will need to find those during the test agent phase you are executing.
I think you answered your own question, though. I believe most of the answers you get will be to use the API to get the information you want. That being said, the only real sure-fire way, I think, would be to add a step to the deployment group phase and let it run the tests on the deployment server.
Not the cleanest solution, but you could also have the deployment group trigger a build definition passing the server name. The build task would just have the testing portion that you want to run. You could have that release step depend on the completion/status of the build definition.
Some features to keep in mind when implementing whatever you decide:
Automatically deploy to new targets in a deployment group
Deploy to failed targets in a Deployment Group
From what I can see, there is no easy way to get at what you want. As per designer documentation:
"When you specify multiple jobs in a build pipeline, they run in parallel by default. You can specify the order in which jobs must execute by configuring dependencies between jobs. Job dependencies are not yet supported in release pipelines. Multiple jobs in a release pipeline run in sequence."
I would imagine this is due to the added complexity inherent in allowing jobs to be run on x number of machines.
The YAML documentation doesn't seem to make the same distinction, but I think it is still a not-yet-supported feature, as YAML release pipelines as a whole seem to be a roadmap item.

Dependencies between BuildConfigurations in TeamCity when deploying

I'm having a hard time figuring out how to correctly deploy to different environments with TeamCity (in terms of cross-BuildConfiguration dependencies) and hope to get some input on how to configure my SubProjects/BuildConfigurations properly. Let's start with a concrete example: I made this test "TeamcityConfigurationTests" to better learn how TeamCity handles dependencies, and the current state shows the result I am looking for:
I have 3 subProjects, Dev, Test and Prod, with all associated tasks for those "environments" as separate build configurations within each subProject. This is to more clearly visualize what is going on and, if anything breaks, to be able to see immediately what is broken (separate Build, UnitTest and DeployToDev build configurations, rather than 3 different steps in one single build configuration).
Ideally, I only want to build my application once in the Dev.Build step, and let the Dev.UnitTest and Dev.DeployToDev steps grab that artifact and run tests and deploy. That I have working, by using snapshot and artifact dependencies. But I am having trouble getting the correct artifact when I want to deploy from Dev -> Test or Test -> Prod.
My issue is how to correctly reference the latest successfully DEV-deployed artifact when running Test.DeployToTest, and likewise the latest successfully TEST-deployed artifact when running Prod.DeployToProd. (Essentially I want to promote the artifact to the next environment.)
Now, my issue is: if Test.DeployToTest has a snapshot dependency on Dev.DeployToDev and an artifact dependency on Dev.Build, and the VCS source has changed since the deploy to Dev ran, it triggers running all the DEV steps again. And that is not the worst part: the same happens when I run Prod.DeployToProd if the VCS source changed since the initial build on Dev (because of all the snapshot dependencies). Meaning that rather than promoting Test -> Prod, I build and deploy whatever is currently on VCS to Dev, Test AND Prod.
How am I supposed to set this up correctly?
The only other option I am aware of is letting Dev.DeployToDev also publish the same artifact, and having only a (latest successful) artifact dependency in Test.DeployToTest. I would also have to publish the artifact again in Test.DeployToTest, so that Prod.DeployToProd can have only a (latest successful) artifact dependency on Test.DeployToTest. (This would get rid of the snapshot dependencies causing previous environments to run build/deploy again in case of VCS changes.) But then I am publishing the artifact 3 times, rather than just once when the application is originally built in DEV, which I would like to avoid. Also, I have cases where no artifact is needed for deploying to Test and Prod, so there is no artifact to depend on (essentially I only need the build number from the "dependent" environment I want to promote from).
I hope for some input. Thank you
Regards
Frederik
For anyone wondering, I made a JetBrains support ticket and got the following response:
Basically, there are two options to resolve your case:
Option 1: use the "Promote" action from the build's Actions top-right menu (or change the type of the Deploy* configurations to deployment and use the action from the block on the build results page). This is the preferred way: before deploying, you select the build to deploy and "promote" it to the next environment. There is also an experimental hidden feature to hide the "Run" button: set the "teamcity.ui.runButton.caption" configuration parameter in the build configurations to an empty value.
Option 2: do not use a snapshot dependency; use only an artifact dependency on the latest successful build. However, when you run the build you cannot be sure that the last successful build you see will be the one deployed: while the build is standing in the queue, another Dev.DeployToDev can finish and then be deployed as the last successful one.
We went with option 1.

How to deploy artifacts of TeamCity to Amazon EC2 Server

We decided to use Amazon AWS cloud services to host our main application and other tools.
Basically, we have an architecture like this:
TESTSERVER: the EC2 instance our main application is deployed to. Testers have access to the application.
SVNSERVER: the EC2 instance hosting our Subversion repository.
CISERVER: the EC2 instance where JetBrains TeamCity is installed and configured.
Right now, I need CISERVER to check out code from SVNSERVER, build it, run the unit tests if the build is successful, and after all tests pass, deploy the artifacts of the successful build to TESTSERVER.
I have completed configuring CISERVER to pull the code, build, test and produce artifacts. But I couldn't work out how to deploy the artifacts to TESTSERVER.
Do you have any suggestion or procedure to accomplish this?
Thanks for help.
P.S.: I have read this question and am not satisfied.
Update: There is a deployer plugin for TeamCity which allows publishing artifacts in a number of ways.
Old answer:
Here is a workaround for the issue that TeamCity doesn't have built-in artifacts publishing via FTP:
http://youtrack.jetbrains.net/issue/TW-1558#comment=27-1967
You can
create a configuration which produces build artifacts
create a configuration, which publishes artifacts via FTP
set an artifact dependency in TeamCity from configuration 2 to configuration 1
Use either manual or automatic triggering to run configuration 2 with artifacts produced by configuration 1. This way, your artifacts will be downloaded from build 1 to configuration 2 and published to your FTP host.
Another way is to create an additional build step in TeamCity for configuration 1, which publishes your files via FTP.
Hope this helps,
KIR
What we do for deployment is that the QA people log on to the system and run a script that deploys by pulling from the TeamCity repository whenever they want. They can see in TeamCity (and get an e-mail) if a new build happened, but regardless they just deploy when they want. In terms of how to construct such a script, the TeamCity component involves retrieving the artifact. That is why my answer references getting the artifacts by URL: that is something any reasonable script can do using wget (which has a Windows port as well) or similar tools.
If you want an automated deployment, you can schedule a cron job (or Windows scheduler) to run the script at regular intervals. If nothing changed, it doesn't matter much. I question the wisdom of this given that it may mess up someone testing by restarting the system involved.
The solution of having TeamCity push the changes as they happen is not something that TeamCity does out of the box (as far as I know), but you could roll your own, for example by having something triggered via one of TeamCity's notification methods, such as e-mail. I just question the utility of that. Do you want your system changing at random intervals just because someone happened to check something in? I would think it preferable to actually request the new version.