When creating a public workspace, does using the fully qualified name help to distinguish between computers with the same name? - azure-devops

I have two DevOps projects based on TFS. I need two public workspaces, set to local (not server), to support the two projects. The projects' implementations are rubber stamps of each other, down to the computer names, though their fully qualified names and IPv4 addresses differ because they are in different domains. The first project is finished and now we are constructing the second.
On the second project I get the error - "TF400285: This operation cannot be performed on a local workspace that does not reside on this computer." I assume the error is because the machine name already exists in DevOps (for example, MYDEV). Can I change the short computer name of the first project's workspace (MYDEV) to a fully qualified name, MYDEV.MYFIRSTDOMAIN.local, so as to distinguish it from the second machine, MYDEV.MYSECONDDOMAIN.local?
On the second project, I tried using aliased names to trick the server into creating the public workspace under a different computer name, but it looks like DevOps uses the computer's original short name and does not allow it. I can create a workspace in the second project with the fully qualified name registered with the DevOps server, but only when the workspace is marked as server (not local), and DevOps then considers the workspace remote - I assume it is associating the workspace with the first project because of the machine's short name. I am tempted to change the first project's workspace computer name to a fully qualified name, if it will let me, but I wanted to check here first.
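If the server is holding a stale workspace record under the old machine mapping, tf.exe has a switch intended for re-mapping workspaces after a computer rename. This is only a sketch - the collection URL is a placeholder, and you should check tf workspaces /? on your TFS version for the exact syntax:

```
REM Run from a Developer Command Prompt on the machine that should own the workspace.
REM Pass the OLD computer name; tf re-maps the current user's workspaces on that
REM computer record to the machine you run the command from.
tf workspaces /collection:https://devops.example.com/DefaultCollection /updateComputerName:MYDEV
```

Note this updates the server's record of which computer owns the workspace; it is not a way to register two live machines under the same short name.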

Related

What does "\a\11\s" in an Azure DevOps file path mean?

I just started working in Azure DevOps and I keep seeing this combination C:\a\11\s or D:\a\11\a at the beginning of file paths. When I search to find out what it means I get no results. The C or D is a "drive" reference, but what is this part \a\11\s or \a\11\a of the path referencing?
This is an implementation detail of the Azure DevOps build agent. The a is short for the agent's work folder, the 11 is a numeric working-folder index that the agent assigns per pipeline (there can be several work folders on one machine), the s is short for sources (the directory with the sources being built), and the trailing a is short for artifacts (i.e. the build results). The short names were chosen to prevent build failures due to overly long paths.
For example, underneath ..\s there is the complete directory tree of the source, which could be deep and use long directory and file names.
Tools used in the build process might have issues with paths longer than 255 characters. The short names do not eliminate the problem, but they make it less likely than verbose directory names would.
The directories are also available through predefined variables, for example Build.SourcesDirectory and Build.ArtifactStagingDirectory.
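For instance, an azure-pipelines.yml step can use those predefined variables instead of hard-coding the short paths. A minimal sketch (PublishBuildArtifacts@1 is the standard publish task; the artifact name is arbitrary):

```yaml
steps:
  - script: dir "$(Build.SourcesDirectory)"     # resolves to the ...\a\11\s folder
    displayName: Show checked-out sources
  - task: PublishBuildArtifacts@1               # publishes the ...\a\11\a folder
    inputs:
      pathToPublish: $(Build.ArtifactStagingDirectory)
      artifactName: drop
```

Using the variables keeps the pipeline working even if the agent's folder layout changes between versions.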
It's a local working folder on the pipeline agent machine. A search for e.g. "pipeline agent working folder" might find some info.

Deploying web app from Visual Studio Code to Azure but leave out a data folder

I am building a very small Node/Express API app in Azure, using Twilio to route communication for a small group. I initially built out a data structure for users in CosmosDB but found out it costs a minimum of $24 per month, which is way over budget for something that will likely hold 20 or so records. Because of this, it seems much more reasonable to just keep the data in a json file that sits in a ./json subfolder. However, it has occurred to me that whenever I deploy, I would be overwriting this file with the default file I have locally. I have been working via the Azure App Service tool in Visual Studio Code and can't figure out a way to make it ignore the file.
I can go into Kudu and copy the file down each time before I deploy, but I will eventually forget and this sounds like a very brittle process.
I added a json/ line to .gitignore, but that has no effect on the deployment (as expected).
I also added "appService.zipIgnorePattern": ["json{,/**}"] to the settings.json file, but instead of just ignoring that folder on the server, it erases it on deploy (the zip excludes the folder, and the deployment then wipes and replaces the whole wwwroot folder). Requesting the file afterwards gives me {"Message":"'D:\\home\\site\\wwwroot\\json\\users.json' not found."}
I was hoping there is a setting that would deploy, replacing all folders in the package, and ignoring all content in the ./json folder. Does this exist?
Alternative solution, 2021:
Instead of excluding folders, select the folder that you do want to deploy. Data in other folders will not be affected.
Deploy from: edit .vscode/settings.json in your local project and add "appService.deploySubpath": "./folderToDeploy"
Deploy to: In the Azure Portal go to your app service. Under Configuration / Application Settings add a new Application Setting with name SCM_TARGET_PATH and value ./folderToDeployTo
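Putting the local half of that together, the project's .vscode/settings.json would look like this (folderToDeploy is the placeholder name from the step above):

```json
{
  "appService.deploySubpath": "./folderToDeploy"
}
```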
Using the VS Code right-click deploy will then deploy only the contents of that folder.

I was able to work around this by adding Azure as a remote branch and using .gitignore. I placed my json file inside a separate folder (content/json) and then added /content/json to my .gitignore file.

VSTS Release Manager copy only specific files

I want to download a specific folder from my team project in VSTS and copy it to an on-premises server. I've set up the VSTS agent and it can copy files just fine using the "Windows Machine File Copy" task, but my problem is that the agent downloads my whole team project starting from the root.
In Artifacts, when I choose Link an artifact source and under type choose Team Foundation Version Control, under Source (repository) I can only choose my team project $/myTeamProject in the dropdown list. I'm not able to provide a deeper path such as $/myTeamProject/Main/subfolder.
Is this the wrong approach? I basically want to copy some files from a subfolder of my team project in VSTS to an on-premises machine, without downloading everything from the root folder ($/myTeamProject). It takes forever when I trigger a release with a single task that copies files. How can the agent map only a specific folder rather than the whole root?
My opinion is that it's not a great approach. Your build should be publishing a set of artifacts that represents a full set of deployable bits that will remain static as you deploy them through a pipeline.
Think of this scenario: You have a release definition with a pipeline defined that goes Dev -> QA -> Prod.
You deploy to Dev. Your release definition pulls in Changeset 1234 from source control.
A few hours later, you deploy to QA. Your release definition pulls in Changeset 1234.
Someone changes some source code. You go to deploy to Prod. Your release definition pulls in Changeset 1235. Now you're deploying some stuff that hasn't been tested in a lower environment. You've just drastically increased the likelihood of a problem.
Same scenario applies if you ever want to redeploy an old version to try to repro a bug.
In short: publish that folder as an artifact as part of your build process.
In a release definition, you can't specify a subset of files to download from the artifact (the artifact source link only lets you choose which build definition the artifact comes from).
But you can specify the files you want to copy with the Windows Machine File Copy task. For its source option, you can point at the subfolder you want to copy, such as $(System.DefaultWorkingDirectory)/BuildDefinition/drop/Main/subfolder.
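As a sketch, the YAML form of that task looks like the following; the machine name, credential variables, and target path are all placeholders you would replace with your own:

```yaml
- task: WindowsMachineFileCopy@2
  inputs:
    SourcePath: $(System.DefaultWorkingDirectory)/BuildDefinition/drop/Main/subfolder
    MachineNames: onprem-server01
    AdminUserName: $(deployUser)
    AdminPassword: $(deployPassword)
    TargetPath: 'C:\deploy\subfolder'
```

Only the subfolder's contents are copied to the target machine, even though the agent downloaded the whole artifact.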

Azure Continuous Integration Overwriting App_Data even with WebDeploy file specified to "exclude app data"

I have a Windows Azure Website and I've set up Azure Continuous Integration with hosted Team Foundation Server. I make a change on my local copy, commit to TFS, and it gets published to Azure. This is great; the problem is that I have an Access database in the ~\App_Data\ folder, and when I check in, the copy on Azure gets overwritten.
I set up a web-deploy publish profile to "Exclude App_Data" and configured the build task to use the web-deploy profile, and now it DELETES my ~\App_Data\ folder.
Is there a way to configure Azure Continuous Integration to deploy everything and leave the App_Data alone?
I use the 'Publish Web' tool within Visual Studio, but I think the principles are the same:
if you modify a file locally and publish, it will overwrite whatever's on the web
if you have no file locally - but the file exists on the web - it will still exist on the web after publishing
The App_Data folder gets no special treatment in this behaviour by default, which makes sense: if you modified an .aspx or .jpg file locally, you would want the latest version to go on the web, right?
I also use App_Data to store some files which I want the web server (ASP.NET code) to modify and have it stay current on the web.
The solution is to:
Allow the web publishing to upload App_Data, no exclusions.
Don't store files in App_Data (locally) that you want to modify on the web.
Let the web server be in charge of creating and modifying the files exclusively.
Ideally you would not have to change any code and the server can create a blank file if necessary to get started.
However if you must start off with some content, say, a new blank .mdf file, you could do the following:
Locally/in source repository, create App_Data/blank.mdf (this is going to be a starting point, not the working file).
In Global.asax, modify Application_Start to create the real working .mdf file from the blank starting file (this needs using System.IO; at the top of the file):
protected void Application_Start()
{
    // If the real file doesn't exist yet (first run),
    // create it from a copy of the placeholder.
    // If it already exists, re-use the existing file.
    string realFile = HttpContext.Current.Server.MapPath("~/App_Data/working.mdf");
    if (!File.Exists(realFile))
    {
        File.Copy(HttpContext.Current.Server.MapPath("~/App_Data/blank.mdf"), realFile);
    }
}

Web.config Versioning

Currently I am using a shared database model for our development. I know, it's better to use local development databases to do database versioning the right way without one developer breaking everyone else's code. So that's what I'm trying to get to. I have a question about the web.config file though. How do I ensure that once every dev has his own local development database, he doesn't have to manually change the DB connection string every time he gets an update from source control? What's the best way to do this?
For example, say Johnny Dev commits his web.config that holds a connection string like this:
server=JohnnysBox;database=JohnnyAppDev1;
So now Susie Dev gets an update and she has to change her connection string to this:
server=SUE;database=development;
So now Susie and Johnny keep committing their own connection strings to the web.config file, and every time they get an update, they have to change the connection strings in all applications.
What's the best way to handle this situation so that devs don't mess up each others' connection string settings, but can push other kinds of config file changes to all the other devs when necessary (like a new app setting)?
It's only a partial solution, but you could have all the developers create an alias for their own SQL server using cliconfg.
Then the web.config in source control will have eg:
server=LocalServerAlias;database=development
For configuration or settings files, what you need to version is:
a template file (server=#USER_NAME#;database=#DATABASE_NAME#;)
one or several value files
one script that replaces the variables with the right values
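The replacement script can be very small. A minimal sketch in Python, assuming #NAME#-style placeholders as in the template above (the file names and the per-developer value set are hypothetical):

```python
import re

def render(template: str, values: dict) -> str:
    """Replace #NAME# tokens in the template with entries from the values dict."""
    return re.sub(r"#(\w+)#", lambda m: values[m.group(1)], template)

# Susie's value file would map the placeholders to her own settings:
susie = {"USER_NAME": "SUE", "DATABASE_NAME": "development"}
print(render("server=#USER_NAME#;database=#DATABASE_NAME#;", susie))
# server=SUE;database=development;
```

Each developer keeps a private value file; only the template and the script are committed, so an update from source control never clobbers anyone's connection string.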
What we do here is to never commit the web.config file to source control. Instead, we commit a web.config.sample file, and each developer merges changes in that file into their own personal web.config file. It's each developer's responsibility to handle those merges.
The way I deal with this is to just not check in developer-specific changes to config files.
When a config change needs to be checked in, I start from a 'clean' config file and make the needed changes, then check in. When everyone else does a get latest, they can merge these changes into their local versions.
The solution we came up with at my office was that we specifically exclude the web.config from version control, but only in the www folder. This allows developers to make whatever changes they need locally.
In a separate folder, we have a "master" copy of the web.config which is version controlled. As new sections, keys, etc. are added, it's the developer's responsibility to update the master copy.
You can create multiple Web.config files depending on the environment the application is running in. Using the transformation syntax you can modify the main Web.config to include or comply with your own local settings.
http://msdn.microsoft.com/en-us/library/dd465326(VS.100).aspx
Afterwards, exclude this custom Web.xxx.config from your repository.
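For example, a Web.Debug.config transform that points the connection string at a local server could look like this (the connection string name Default is a placeholder for whatever your app uses):

```xml
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="Default"
         connectionString="server=(local);database=development;"
         xdt:Transform="SetAttributes(connectionString)"
         xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```

At build/publish time the transform rewrites only the matched attribute, leaving the rest of the main Web.config untouched.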
We branch the web.config. So, I've got one called Mattweb.config and I can change it at will, and it replaces the web.config ON MY LOCAL MACHINE ONLY with the contents of Mattweb.config. It requires no intervention by me.
We also branch the "real" web.config, so that I can compare with my own local version to see if any appsettings were added or any other types of changes. Then I just update my Mattweb.config file and all is well again.
Use (local) as the sql server name and it always refers to the local server. This should be the default value in the web.config you check into source control.
For production "installs", your installer should ask the user if they want to use a remote sql server and if so, update the web.config files as part of the install process.