We have a TeamCity server that produces nightly deployable builds. We want our beta testers to have access to these nightly builds.
What are the best practices for doing this? The TeamCity server is not public; it is in our office, so I assume the best approach would be pushing the artifacts via FTP or something like that.
Also, I have no clue how to trigger a script when an artifact is created successfully. Does TeamCity provide a way to do that?
I don't know of a way to trigger a script, but I wouldn't worry about that. You can retrieve artifacts via a URL. Depending on what makes sense for your project, you could have a script set up on a scheduler (cron or the Windows Task Scheduler) that pulls the artifact and sends it to the FTP site for the beta testers. You can configure it to pull only the latest successful artifact. If you set up the naming right, the beta testers won't even notice when a build fails: the new build number simply won't be there, and no bad builds will be pushed to them. A sketch of such a pull script is below.
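For illustration, here is a minimal Python sketch of that pull step. It relies on TeamCity's standard .lastSuccessful download URL with guest access enabled; the server address, build configuration ID and artifact name are placeholders you would replace with your own:

import urllib.request

# TeamCity serves the artifacts of the latest successful build under a
# stable URL; guestAuth works only if guest access is enabled on the
# server (otherwise use httpAuth plus credentials).
TEAMCITY = "http://teamcity.example.local"   # placeholder server address
BUILD_TYPE = "MyProject_NightlyBuild"        # placeholder build configuration ID
ARTIFACT = "myapp-nightly.zip"               # placeholder artifact name

url = "%s/guestAuth/repository/download/%s/.lastSuccessful/%s" % (
    TEAMCITY, BUILD_TYPE, ARTIFACT)

# Because of .lastSuccessful, a failed nightly build simply leaves the
# previous good artifact in place; testers never receive a broken build.
urllib.request.urlretrieve(url, ARTIFACT)
print("Downloaded", ARTIFACT)

Scheduled with cron or the Windows Task Scheduler, this pulls only the most recent green build; pushing the downloaded file on to the beta FTP site can be a second step in the same script (see the FTP upload sketch further down for one way to do that).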
Read the following help page from the documentation. It shows how to send commands from your build script that tell TeamCity to publish the artifacts to a given path.
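For context, the mechanism behind that page is TeamCity's service messages: anything the build script writes to standard output in the ##teamcity[...] format is picked up by the server. A one-line example in Python, where the path pattern is just an illustration:

# Writing this service message to stdout tells TeamCity to publish the
# matching files as build artifacts ('out/*.zip' is an example path).
print("##teamcity[publishArtifacts 'out/*.zip']")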
In TeamCity 7.0+ you can use the Deployer plugin. Installation steps can be found here. It also allows you to upload artifacts via SMB and SSH.
I suggest you start looking at something like (n)Ant to handle your build process. That way you can handle the entire "build artifacts" -> "publish artifacts" chain in an automated manner. These tools are dependency-based, so the artifacts would only be published if the build succeeded.
I am trying to create a release pipeline in Azure DevOps. We already have a functioning build pipeline that works well; it packages the build with VSBuild and publishes it as an artifact. Then, in the release pipeline, I am using an IIS Deployment job (which includes the IIS Manage and IIS Deploy tasks) that gets that artifact to deploy.
The problem is that we already have a publish profile (.pubxml) that should take care of pretty much everything the IIS Deployment is doing (at least as far as I understand it). So it seems I have two options that don't require me to refactor the project configuration itself.
I can try to mimic the settings of the IIS Deployment job to match our .pubxml as closely as possible and manually apply any changes that aren't doable through the task settings. Obviously this is not ideal, as it would require us to update both whenever we make changes, and it introduces a large chance of the pipeline breaking down over time.
I can scrap the idea of using IIS Deployment and just use a VSBuild task with the arguments /p:DeployOnBuild=true /p:PublishProfile=Staging. This doesn't seem like best practice, because it means my release pipeline isn't passing a build package to deploy; it is just creating a new one at each stage.
So is there a better option that would allow me to utilize the package I created with VSBuild and the .pubxml configuration together in a deploy? If that isn't possible then are either of my options the "correct" way to handle my situation or am I just missing another method of deployment I could use?
Thank you for any help or insight you can provide. Please let me know if there is any more information I can give that would be useful.
You can try using a publish settings file (*.publishsettings) for your IIS deployment.
A publish settings file (.publishsettings) is different from a publish profile (.pubxml) created in Visual Studio. A publish settings file is created by IIS or Azure App Service, or it can be created manually, and then imported into Visual Studio.
To view more details, you can see:
Publish an application to IIS by importing publish settings in Visual Studio
Deploy your app to a folder, IIS, Azure, or another destination
So unfortunately there doesn't seem to be a way to achieve everything I wanted here. The publish profiles are required when we build the project, so without changing how we configure those, I need to build the project whenever I want to deploy. Ultimately I went with option #2. I essentially copied most of the build tasks used in the testing pipeline, placed them in the release pipeline, and modified a few commands to actually deploy the build once finished. It all seems to work fine, but it still doesn't feel like best practice. If I am missing something, please let me know and I will update as appropriate.
I'm trying to use TFS 2015.1 on-premises to build a CI pipeline for our dev and UAT environments. I've created a vNext CI build, which builds fine. But when I want to add a deploy step for an on-prem IIS server, I only see Azure Web Deployment options.
Ideally I wanted to add a step which uses the existing deploy (MS Deploy) profiles, which I'm able to use from VS2015 directly via 'Publish'. However, I see no option to do so.
How can I deploy the latest build to internal dev servers (not Azure)? I would like to use the MS Deploy option, unless there's a better way of doing it.
The fact that there is no option to do so starts to make me think there's probably a different way to accomplish it!
Thanks.
If you're able to upgrade to TFS 2015.2, it shipped with web-based Release Management, which works similarly to Build vNext, with flexible, open-source tasks. You can also customize tasks.
Here's a link to the IIS Web App Deployment task from the vso-agent-tasks GitHub repo, where Microsoft stores updated versions of the tasks that you can download for web-based Build and Release Management.
I'll be publishing a blog about web-based RM with TFS 2015 Update 2 or VSTS on my website in the next few weeks. To give you an idea though, the starting point (for a web application) is a folder in your web project called WebDeploy (no significance, any name will do) that contains a PowerShell DSC script which configures the server, deploys the web files and then replaces any tokenised configs. To give you an idea, see this post about how to use DSC to configure servers. (It only covers part of the final script though!) The next steps are:
In the build hub, create a Website artifact containing your web files and the DSC script.
In the release hub, for an environment, use a Windows Machine File Copy task to deploy the artifact to a temp folder on the target node.
Then use a PowerShell on Target Machines task to execute the DSC script. After configuring the server, the script copies the web files to their proper location, sorts out the config using xReleaseManagement and cleans up the WebDeploy folder.
See this article for general details of the route I'm taking, but watch out, as it has some errors, e.g. the firewall instructions are incomplete (file and print sharing through the firewall needs to be enabled).
I can thoroughly recommend the PowerShell DSC route - I've had a few glitches, but on the whole it feels very productive and the right way to go.
We've got a TeamCity (9.1) build configuration which relies on several snapshot dependencies to build correctly. I'm looking for a convenient way to let each developer set up a proper build environment on their desktop. For this, I would like to download all the snapshot dependencies for a given build configuration from the TeamCity server onto the developer's desktop using the REST API.
I'm aware of how to access artifacts using REST, but that only addresses the artifacts created by a specific build configuration. I'm looking for a way to download all the artifacts used by a given configuration, as specified by its dependencies.
There isn't an easy way to do this; however, it's not impossible. My answer is provided below, followed by a possible alternate solution.
Answer:
The artifacts used by your target build are really just the artifacts that were created by its dependencies, right?
I think what you are looking for is referenced here, where you can query a build for all of its snapshot dependencies.
Once you have a list of the dependencies, you would then need to query each of them for the artifacts they generated, and then you could proceed to download them.
It's not the most straightforward thing and would require some slick PowerShell or Python or whatever, but it is doable; a rough sketch is below.
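To make that concrete, here is a rough Python sketch of the approach. It assumes the usual TeamCity REST endpoints (/app/rest/builds/... with the artifacts/children and artifacts/content sub-paths); the server URL, credentials and build ID are placeholders, and it handles top-level artifact files only (directories would need recursion):

import os
import json
import base64
import urllib.request

SERVER = "http://teamcity.example.local"            # placeholder server address
BUILD_ID = "12345"                                  # the build whose dependencies you want
AUTH = base64.b64encode(b"user:password").decode()  # placeholder credentials

def rest_get(path):
    # Ask TeamCity for JSON instead of its default XML.
    req = urllib.request.Request(SERVER + path)
    req.add_header("Authorization", "Basic " + AUTH)
    req.add_header("Accept", "application/json")
    return json.load(urllib.request.urlopen(req))

# 1. Query the target build for its snapshot dependencies.
build = rest_get("/app/rest/builds/id:" + BUILD_ID)
deps = build.get("snapshot-dependencies", {}).get("build", [])

# 2. For each dependency build, list its artifacts and download them.
for dep in deps:
    dep_id = str(dep["id"])
    listing = rest_get("/app/rest/builds/id:%s/artifacts/children" % dep_id)
    for item in listing.get("file", []):
        name = item["name"]
        url = SERVER + "/app/rest/builds/id:%s/artifacts/content/%s" % (dep_id, name)
        req = urllib.request.Request(url)
        req.add_header("Authorization", "Basic " + AUTH)
        os.makedirs(dep_id, exist_ok=True)
        with open(os.path.join(dep_id, name), "wb") as f:
            f.write(urllib.request.urlopen(req).read())
        print("Downloaded", dep_id, name)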
Another Idea:
Have you looked into something like Artifactory? It sounds like what you really need is a binary repository of sorts to track the artifacts used and the artifacts created.
Or, for small projects, you could probably get away with just using a file share on the network, where the build could "copy" to the share, organizing files into "build" directories of some sort, and developers could "read" from the share.
I have started using the free Jenkins build service on BuildHive for one of my GitHub projects. This is also my first attempt at doing anything with Maven. I have succeeded in building my project using this script on BuildHive:
cd base_dir
mvn package
The build log shows that the resulting JAR has been built. Now I would like to offer the JAR to my project's users as a download artifact because GitHub has discontinued the feature of manually uploading binaries in a separate download section.
Is there any way I can download an artifact, referencing it by a URL? If so, how do I construct the URL, knowing only the artifact's local path from the build log?
Alternatively, is there a way in which I can push the artifact to another place by adding a command to my build shell script after mvn package? I was thinking of something like a curl or ftpput command.
The best thing I was able to come up with as a quick workaround was to upload the artifacts in question to my FTP server via curl, as suggested in my original question. It works, but the downside is that the FTP credentials end up in the public build log. I have counterbalanced that with a shell script on my DSL router which checks for FTP storage abuse every few minutes. A sketch of a slightly safer upload step is below.
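The upload can just as well be scripted with the credentials read from the environment rather than spelled out on the command line, which keeps them out of the public build log, assuming the build environment lets you define such variables (I haven't verified that BuildHive does). A minimal Python sketch using ftplib; the host, target directory and JAR path are placeholders:

import os
from ftplib import FTP

# Credentials come from the environment so they never appear in the log.
host = os.environ["FTP_HOST"]        # e.g. ftp.example.org (placeholder)
user = os.environ["FTP_USER"]
password = os.environ["FTP_PASS"]

ftp = FTP(host)
ftp.login(user, password)
ftp.cwd("/downloads/nightly")        # placeholder target directory
# The local path is whatever the Maven build log reports for the JAR.
with open("target/myproject-1.0.jar", "rb") as f:
    ftp.storbinary("STOR myproject-1.0.jar", f)
ftp.quit()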
As an alternative, I found that after creating a free CloudBees account for my little open source project, I got my own Jenkins build configuration as well as my own artifact repository to which I can deploy my build artifacts. This is much more elegant and does not involve posting any FTP credentials to a public server.
I am still open for BuildHive-only solutions if anyone has a smart idea. :-)
We decided to use Amazon AWS cloud services to host our main application and other tools.
Basically, we have an architecture like this:
TESTSERVER: The EC2 instance our main application is deployed to. Testers have access to the application.
SVNSERVER: The EC2 instance hosting our Subversion repository.
CISERVER: The EC2 instance where JetBrains TeamCity is installed and configured.
Right now, I need CISERVER to check out the code from SVNSERVER, build it, unit test it if the build is successful, and, after all tests pass, deploy the artifacts of the successful build to TESTSERVER.
I have finished configuring CISERVER to pull the code, build, test and produce artifacts. But I couldn't figure out how to deploy the artifacts to TESTSERVER.
Do you have any suggestions or procedures to accomplish this?
Thanks for the help.
P.S.: I have read this question and am not satisfied.
Update: There is a Deployer plugin for TeamCity which allows you to publish artifacts in a number of ways.
Old answer:
Here is a workaround for the issue that TeamCity doesn't have built-in artifacts publishing via FTP:
http://youtrack.jetbrains.net/issue/TW-1558#comment=27-1967
You can:
create a configuration which produces the build artifacts
create a second configuration which publishes the artifacts via FTP
set an artifact dependency in TeamCity from configuration 2 to configuration 1
Use either manual or automatic triggering to run configuration 2 with the artifacts produced by configuration 1. This way, your artifacts will be downloaded from build 1 into configuration 2 and published to your FTP host.
Another way is to create an additional build step in TeamCity for configuration 1 which publishes your files via FTP (or over SSH, as in the sketch below).
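Since TESTSERVER is an EC2 instance, SSH is most likely already open, so that extra build step could push the artifacts over SFTP instead of FTP. A minimal sketch using the Python paramiko library; the host name, key file, user and paths are placeholders:

import paramiko

HOST = "testserver.example.com"      # placeholder TESTSERVER address
KEY_FILE = "/path/to/ec2-key.pem"    # the instance's SSH key pair

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # acceptable for a known internal host
client.connect(HOST, username="ec2-user", key_filename=KEY_FILE)

# Copy the build artifact to the deployment directory on TESTSERVER.
sftp = client.open_sftp()
sftp.put("artifacts/myapp.war", "/var/www/myapp/myapp.war")   # placeholder paths
sftp.close()
client.close()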
Hope this helps,
KIR
What we do for deployment is that the QA people log on to the system and run a script that deploys by pulling from the TeamCity server whenever they want. They can see in TeamCity (and get an e-mail) when a new build has happened, but regardless, they just deploy when they want. In terms of how to construct such a script, the TeamCity component involves retrieving the artifact. That is why my answer references getting the artifacts by URL: that is something any reasonable script can do using wget (which has a Windows port as well) or similar tools.
If you want an automated deployment, you can schedule a cron job (or a Windows scheduled task) to run the script at regular intervals. If nothing changed, it doesn't matter much. I question the wisdom of this, though, given that it may disrupt someone's testing by restarting the system involved.
The solution of having TeamCity push the changes as they happen is not something that TeamCity does out of the box (as far as I know), but you could roll your own, for example by having something triggered via one of TeamCity's notification methods, such as e-mail. I just question the utility of that. Do you want your system changing at random intervals just because someone happened to check something in? I would think it preferable to actually request the new version.