I just started working in Azure DevOps and I keep seeing the combination C:\a\11\s or D:\a\11\a at the beginning of file paths. When I search for it I get no results. The C or D is a "drive" reference, but what does the rest of the path, \a\11\s or \a\11\a, refer to?
This is an implementation detail of Azure DevOps. The first a is short for agent and is the agent's working folder; the 11 is the numeric ID of the working folder the agent allocates for a particular pipeline (an agent can serve several pipelines, and a machine can host several agents, each with its own working folder); the s is short for sources (the directory containing the sources being built); and the second a is short for artifacts (i.e. the build results). The short names were chosen to prevent build failures caused by overly long paths.
For example, underneath ..\s there is the complete directory tree of the source, which could be deep and use long directory and file names.
Tools used in the build process might have issues with paths longer than about 260 characters (the Windows MAX_PATH limit).
This approach does not eliminate the problem, but it makes it less likely than if verbose directory names had been chosen.
The directories are also available through predefined variables, for example Build.SourcesDirectory or Build.ArtifactStagingDirectory.
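For example, in a PowerShell build step those predefined variables are exposed as environment variables (dots replaced by underscores), so a script never needs to hard-code the short paths. A minimal sketch, where the bin\Release subfolder is just a placeholder:
# Minimal sketch for a PowerShell step on a Windows agent: the predefined
# variables are surfaced as environment variables (dots become underscores).
Write-Host "Sources:   $env:BUILD_SOURCESDIRECTORY"          # e.g. C:\a\11\s
Write-Host "Artifacts: $env:BUILD_ARTIFACTSTAGINGDIRECTORY"  # e.g. C:\a\11\a

# Copy build output into the staging directory instead of hard-coding C:\a\11\a
# (the bin\Release subfolder is just a placeholder).
Copy-Item -Path (Join-Path $env:BUILD_SOURCESDIRECTORY 'bin\Release\*') `
          -Destination $env:BUILD_ARTIFACTSTAGINGDIRECTORY -Recurse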
It's a local working folder on the pipeline agent machine. A search for e.g. "pipeline agent working folder" might find some info.
Related
This question makes clear that you can run multiple build agents on a single machine, by giving each agent its own directory, e.g. c:\agent1, c:\agent2: VSTS build agents - Can one computer run multiple build agents?
But when configuring these agents, can they use the same work folder, or must the work folders be distinct, e.g. a shared c:\builds vs. c:\builds\1 and c:\builds\2?
According to the MS docs:
The work directory is owned by a given agent and should not be shared between multiple agents.
In general, build agents check out the source code into the working directory and then work with the sources pulled from the repository. If you allow two agents to point to a single directory, you'll end up with a mess and unpredictable build results, at the very least.
One special case could be if you disable the option to check out the source code; in that case the build agents just run certain embedded scripts. But that is not a common case, and if you never share the working directory between agents, you're always on the safe side.
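For illustration, registering two agents on one machine with distinct agent and work folders might look roughly like this. This is a sketch assuming the Azure Pipelines/VSTS agent's config.cmd; the organization URL, PAT variable and pool name are placeholders:
# Hedged sketch: register two agents on one machine, each with its own agent
# folder and its own work folder. The URL, $pat token and pool name are placeholders.
Set-Location C:\agent1
.\config.cmd --unattended --url https://dev.azure.com/yourorg --auth pat --token $pat `
             --pool Default --agent BuildAgent1 --work C:\builds\1

Set-Location C:\agent2
.\config.cmd --unattended --url https://dev.azure.com/yourorg --auth pat --token $pat `
             --pool Default --agent BuildAgent2 --work C:\builds\2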
In TFS (2013 Update 4) I am trying to write a PowerShell script to copy modified SQL files that are tied to a build. I can get and copy the appropriate files if I know the changeset number, which will often be enough (I can use the TF_BUILD_SOURCEGETVERSION environment variable when the build is triggered by a merge). However, occasionally there will be a handful of changesets that are associated with the build in TFS.
Using the Build Number, how do I get a list of Changesets?
You need to use your build number to find the previous build number. You will then have both a start changeset (from the previous build) and an end changeset (from the current build).
You can then walk the gap with the API and find all the intervening changesets.
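A rough sketch of the idea using tf.exe rather than the object model, assuming you have already resolved each build to its changeset (for example by parsing TF_BUILD_SOURCEGETVERSION, which looks like C12345); the tf.exe path, team project path and collection URL are placeholders:
# Hedged sketch: list every changeset between the previous build and the current one.
$previousChangeset = 12300   # changeset of the previous build (placeholder)
$currentChangeset  = 12345   # changeset of the current build  (placeholder)

& "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\tf.exe" history `
    '$/MyTeamProject' /recursive /noprompt `
    /version:"C$($previousChangeset + 1)~C$currentChangeset" `
    /collection:http://tfs:8080/tfs/DefaultCollection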
I've done this in my last engagement. In essence we solved it by doing a get of all SQL-related files on EVERY build and producing a csv file that contained information about each file: name, version, and most importantly an MD5 hash of the file. Then with each deployment we create/update/insert into a special deployment table in our DB a record of all SQL "run" against that DB. So our build script really just produces the csv file; the deployment script has the intelligence, checks whether anything has changed in the csv file vs. the target DB, and only applies changes (new SQL, or changed SQL with a new MD5). We essentially use two scripts. I can't share the scripts, but you get the idea. I would also look at this article by Alexander:
Automating SQL Server Database Deployments: Scripting Details, where he explains a lot about db migration.
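A stripped-down sketch of the manifest idea (not the actual scripts), assuming a TFS 2013 default-template build where the TF_BUILD_* environment variables are available and PowerShell 4+ for Get-FileHash; file names and paths are placeholders:
# Hedged sketch: hash every .sql file under the sources directory and write
# name, relative path and MD5 to a CSV the deployment step can diff against
# the target database's deployment table.
$sourceRoot = $env:TF_BUILD_SOURCESDIRECTORY
Get-ChildItem -Path $sourceRoot -Filter *.sql -Recurse |
    ForEach-Object {
        [pscustomobject]@{
            Name         = $_.Name
            RelativePath = $_.FullName.Substring($sourceRoot.Length).TrimStart('\')
            MD5          = (Get-FileHash -Path $_.FullName -Algorithm MD5).Hash
        }
    } |
    Export-Csv -Path (Join-Path $env:TF_BUILD_BINARIESDIRECTORY 'sql-manifest.csv') -NoTypeInformation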
When doing a get-latest from TFS, all timestamps are set to the time at which the get operation was executed. When running msdeploy to perform a sync, the timestamps in the source are compared with the timestamps on the target server. Of course, this means that with TFS + msdeploy, every file will be pushed to the target servers after every build, unless
You use incremental builds
You have only a single build agent in the build controller's pool.
If the build definition is set to do Clean builds, or if you want to utilize multiple build agents, then this no longer works.
This topic comes up all the time, and once every couple of years I cast out new lines in case something has changed. This could be fixed in a couple of different ways:
TFS sets timestamps on workspace files to the last checkin time.
TFS sets timestamps on workspace files to the last modified time from the files themselves when they were last checked in.
msdeploy uses some content-based comparison method (e.g. MD5) to compare files, rather than timestamp comparisons.
Something else?
I never know where to go to search for this stuff, since both of these teams are pretty opaque, the webdeploy team in particular. Is this a problem that has been solved yet?
The TFS and Visual Studio teams are entirely transparent, and you can submit feature requests through http://visualstudio.uservoice.com and bugs through http://connect.microsoft.com.
However, all file timestamps within a server workspace are set to the date the file was last modified on the server. Local workspaces physically compare the file contents to determine changes. You can switch between local and server workspaces in the workspace properties.
In the end, we got around this by writing a PowerShell script to wrap the .cmd file produced by the Web Publishing Pipeline, and passing the -useChecksum flag in the command that invokes the .cmd script. Since the boilerplate .cmd created by WPP allows passing additional arguments to msdeploy, we were able to accomplish this with a line like the following.
& "MyProject.cmd" /u:agent /p:P#ssw0rd /m:$ComputerName /y -useChecksum
In this way, even though TFS is creating workspaces with timestamps set to the get-latest time, msdeploy is now instructed to use checksums instead.
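A stripped-down sketch of what such a wrapper might look like (server names, credentials and the .cmd name are placeholders, and the real script did more than this):
# Hedged sketch of the wrapper: invoke the WPP-generated .cmd once per target
# server, forwarding -useChecksum so msdeploy compares content instead of timestamps.
param(
    [string[]] $ComputerNames = @('web01', 'web02'),   # placeholder servers
    [string]   $UserName      = 'agent',
    [string]   $Password      = 'P#ssw0rd'
)

foreach ($ComputerName in $ComputerNames) {
    & ".\MyProject.cmd" /u:$UserName /p:$Password /m:$ComputerName /y -useChecksum
    if ($LASTEXITCODE -ne 0) { throw "Deployment to $ComputerName failed." }
}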
Background:
We have one Jenkins job (Production) to build a deliverable every night. We have another job (ProductionPush) that pushes out the deliverable over a proprietary protocol to production machines the next day. This is because some production machines are only available during certain hours during the day (It also gives us a chance to fix any last-minute build breaks). ProductionPush needs access to the deliverable built by the Production job (so it needs access to the same workspace). We have multiple nodes and concurrent builds (and thus unpredictable workspaces) and prefer not to tie the jobs to a fixed node/workspace since resources are somewhat limited.
Questions:
How can I make sure both jobs share the same workspace and ensure that ProductionPush runs at a fixed time the next day only if Production succeeds, without fixing both jobs to the same node/workspace? I know the Parameterized Trigger Plugin might help with some of this, but it does not seem to have a time-delay capability, and 12 hours seems too long for a quiet period.
Is sharing the workspace a bad idea?
Answer 2: Yes, sharing the workspace is a bad idea. There is the possibility of file locks and the issue of the workspace being wiped out. Just don't do it...
Answer 1: What you need is to Archive the artifacts of the build. This way, the artifacts for a particular build (by build number) will always be available, regardless of whether another build is running or not, or what state the workspaces are in.
To Archive the artifacts
In your build job, under Post-build Actions, select Archive the artifacts
Specify what artifacts to archive (you can use a combination of the patterns below)
a) You can archive all: *.*
b) You can archive a particular file with wildcards: /path/to/file_version*.zip
c) You can ignore the intermediate directories like: **/file_version*.zip
To avoid storage problems with many artifacts, at the top of the configuration you can select Discard Old Builds, click the Advanced button, and play around with Days to keep artifacts and Max # of builds to keep with artifacts. Note that these two settings do not control how long the actual builds are kept (other settings control that)
To access artifacts from Jenkins
In the build history, select any previous build you want.
In addition to SCM changes and revisions data, you will now have a Build Artifacts link, under which you will find all the artifacts for that particular build.
You can also access them with Jenkins' permalinks, for example
http://JENKINS_URL/job/JOB_NAME/lastSuccessfulBuild/artifact/ and then the name of the artifact.
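For example, a script on another machine could pull an archived artifact straight from that permalink. A minimal sketch with placeholder URL, job and artifact names (add credentials if your Jenkins is secured):
# Hedged sketch: download the deliverable archived by the last successful
# Production build via the Jenkins permalink.
$url = 'http://JENKINS_URL/job/Production/lastSuccessfulBuild/artifact/output/deliverable.zip'
Invoke-WebRequest -Uri $url -OutFile 'C:\drop\deliverable.zip'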
To access artifacts from another job
I've extensively explained how to access previous artifacts from another deploy job (in your example, ProductionPush) over here:
How to promote a specific build number from another job in Jenkins?
If your requirement is to always deploy the latest build to Production, you can skip the configuration of promotion in the above link. Just follow the steps for configuring the deploy job. Once you have your deploy job, if it is always run at the same time, just configure its Build periodically parameters. Alternatively, you can have yet another job that triggers the deploy job based on whatever conditions you want.
In either case above, if your Default Selector is set to Latest successful build (as explained in the link above), the latest build will be pushed to Production.
Not sure archiving artifacts is really a good idea. A staging repository might be better, as it enables cross-functional teams to share artifacts across different builds when required, by tweaking the Maven settings.xml file.
You really want a deployable (ear/war) as the thing that gets built, tested, and then promoted to production once confidence in the build is high.
Use a build number on your deployable (major.minor.buildnumber). This is the thing you promote to production, provided your tests can be relied upon. Don't use a hyphen to separate the minor version from the build number, as that forces Maven to perform a lexical comparison; a decimal point forces a numeric comparison, which will give you far fewer headaches.
Also, you didn't mention your target platform, but using the Maven APT/RPM plugin to push an APT/RPM package to an APT/YUM repo that's available to a production box (AFTER successful testing!) would be a good fit and in line with industry standards.
We have our build and deployment scripts set up in TFS 2010.
But we are also evaluating Inedo BuildMaster. Has anyone used this before?
Also, in general, for a full .NET house does it make sense to have another SCM management tool?
Here is the link for Inedo
I found this while researching Inedo's BuildMaster as well. We're a .NET/TFS shop, and BuildMaster solves all sorts of different problems.
Here's a blog post I found that discusses the differences:
http://blog.inedo.com/2011/06/06/how-does-buildmaster-compare-to-team-foundation-server/
We're using the free version of BuildMaster and may upgrade to enterprise once we use it for other projects.
BuildMaster has a TFS plugin that helps grab builds from TFS Builds. We use gated check-in to ensure the code builds, and BuildMaster to package the build for one-click deployment through the environments. BuildMaster has a fix-forward approach (as in, no rollbacks), where you create many builds for a release and each propagates through each environment; when one or more builds exist in, say, QA and have not moved to Staging, they will all be moved to Staging at the same time, but in order, thereby ensuring all artifacts move through every environment.
Prior to BuildMaster, we used an XML-driven PowerShell script that worked well, but BuildMaster agents saved us from remote-desktop script execution. Our PowerShell script has one advantage that BuildMaster does not yet have: we used the XML configuration file to hold application configuration file information, including file names, relative paths and XPath settings, to inject values and XML fragments into, and remove XML nodes from, configuration files coming from source control. BuildMaster uses template configuration files stored in BuildMaster, with tag replacement for each environment. This results in high maintenance should anything change in a configuration file, such as additional environment-agnostic sections being added, which would require recreating the template.
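A stripped-down sketch of that XPath-driven approach (the settings file layout, element names and paths here are hypothetical, not our actual script):
# Hedged sketch: a settings XML file lists, per configuration file, a relative
# path, the XPath of the node to change and the value to inject; the script
# applies each entry to the file pulled from source control.
$sourceRoot = 'C:\builds\drop'                           # placeholder
$settings   = [xml](Get-Content '.\deploy-settings.xml') # hypothetical settings file

foreach ($entry in $settings.settings.entry) {
    $configPath = Join-Path $sourceRoot $entry.relativePath
    $config     = [xml](Get-Content $configPath)

    $node = $config.SelectSingleNode($entry.xpath)
    if ($node -ne $null) {
        $node.InnerText = $entry.value                   # inject the environment-specific value
        $config.Save($configPath)
    }
}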
BuildMaster does have a custom action that allows you to run executables, so theoretically you can run your own commands to perform functionality that BuildMaster does not have built in, but this is not ideal.