I'm setting up a build definition in Visual Studio Team Services using a Build Agent installed on my local machine for testing.
I'm following these instructions for creating a build agent, setting up a build definition, and queuing a build. I've created the agent on my local computer and it appears in the agent pool in VSTS. The agent is enabled and ready to go. I've also created a build definition that invokes my build script. Everything up to this point appears to work fine.
At this point I'm ready to queue a build and run it via the queue build dialog.
The dropdown labeled "Queue" only shows the Hosted agent pool. There should be a second pool called Default, but it is not appearing. I can get it to "appear" by right-clicking to inspect the HTML and then using dev tools to change the value of the Hosted option. Hosted's ID is 2; I changed it to 1, assuming that to be the ID for Default. Once I do this I can click "OK" and the build runs as expected -- everything is checked out on my local machine by the build agent. Presumably my assumption about the ID value is correct.
So...everything is working correctly once I muck around with the plumbing a bit. But this is definitely not the way things should work. Why is the Default queue not showing up in the dropdown? Do I need to flip a switch somewhere to make it work? Does my account not have enough access?
Some other details:
My account is a "Pool Administrator"
The build agent is not installed as a Windows service. I start it manually from a command prompt. I've not been able to install it as a service.
The machine that has the build agent installed on it is running Windows 10 x64 Pro. It was upgraded from Windows 8 x64 Pro.
I cannot use a hosted agent as I'm building a Unity project and Unity is not supported on hosted agents.
I know I can use Unity Cloud Build but I do not want to.
UPDATE
I've removed my previous Build Agent and installed a new one, as a service, on a Windows Azure VM running Windows 10 Enterprise x64. With this change the "Hosted" and "Default" queues are appearing as expected.
Your account also needs access to the agent queue. Agent pools and agent queues are different entities, and being a "pool administrator" does not necessarily mean you are a "queue administrator".
In my case it helped to run the agent configuration in a console with elevated/administrator rights. If the agent configuration is done in a console with normal rights, the agent can still be configured properly, but its queue won't appear for selection when you queue a new build.
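For reference, a minimal sketch of what that looks like (the agent folder C:\agent is an assumption; yours may differ):

rem From a command prompt opened with "Run as administrator":
cd C:\agent
config.cmd
rem After configuration completes, start the agent:
run.cmd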
Related
I want to set up a multi-server deployment using the "Deploy BizTalk Application" task in an Azure release pipeline from a deployment group job, but the task installs all the artifacts (adding resources, GAC, bindings) on all 3 servers. Is there a way to limit the bindings and resource additions to only the first node?
The current behaviour generates an exception:
Concurrency Violation encountered while Updating
One other thing: in BizTalk 2016 FP2, Microsoft added an enhancement for deployment groups. Does anyone know what actually changed?
This is going to be a long post, so hang on.
You want to learn about BTDF (BizTalk Deployment Framework). I wrote a whole guide for my internal team, so I cannot share that easily. But I'm going to try to explain what you need to do.
1) In our Azure DevOps organization, add the extensions "Deployment Framework for BizTalk" and "BTDF Project Updater" (I wrote that one, but it's optional; it updates the version number for the generated MSI).
2) There are guides online, but learn how to package your project as an MSI and make it deployable using the BTDF within the Build Pipeline. Leverage "BTDF Project Updater" to increment the version number.
3) Now, you said you have 3 servers in your BizTalk environment. During a manual BizTalk deployment, Server 1 and Server 2 get a "light" BizTalk deployment and Server 3 gets the FULL BizTalk deployment. This means the Release Pipeline should deploy to Servers 1 and 2, but do it a tiny bit differently on Server 3.
3.1) Create a normal Agent Pool for Server 3, and associate the ADO Agent on Server 3 to that.
3.2) Create a Deployment pool and associate the agents for Server 1 and Server 2 (I think you've done that already)
4) Create your Release Pipeline for that particular environment, but we are going to put in 2 Agent phases instead of the default 1. Notice that I am using different types of agent jobs for the pools created above.
For the Release Pipeline tasks for each agent I just happen to be using a task group template because I have many release pipelines. I have one called "Standard Deployment - Not Final" and one called "Standard Deployment" (which is the final). WHY? BizTalk requires application binaries and certain other artifacts to be installed on every BizTalk server that runs the application. However, the BizTalk application, its port bindings, rule policies, and more must be registered in the BizTalk databases only once within the group. This is the reason for the checkbox: notice it is unchecked for the "Not Final" template and checked for the one that will be the "Final". The installation will go very quickly on Servers 1 and 2, but take longer on Server 3 because of this.
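If you ever script that distinction outside the task group, it maps to the BTDF's DeployBizTalkMgmtDB MSBuild property. A hedged sketch (the .btdfproj path and version are placeholders for illustration):

rem Servers 1 and 2 ("Not Final"): install binaries only, skip the BizTalk databases
msbuild "C:\Program Files (x86)\MyApp for BizTalk\1.0\Deployment\Deployment.btdfproj" /t:Deploy /p:DeployBizTalkMgmtDB=false /p:Configuration=Server
rem Server 3 ("Final"): also register bindings, rule policies, etc. in the BizTalk databases
msbuild "C:\Program Files (x86)\MyApp for BizTalk\1.0\Deployment\Deployment.btdfproj" /t:Deploy /p:DeployBizTalkMgmtDB=true /p:Configuration=Server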
Now you can start your ADO Release Pipeline to test it out. This process works great, and I'm using it in conjunction with GitVersion.
I know I left a lot out of this guide, like the actual details of the tasks for stopping the BizTalk app via the PS script, undeploying the BizTalk app, uninstalling the MSI, and why I copy the MSI to the installation directory and then install it. You can read more in the official documentation here: http://www.tfabraham.com/BTDFDocs/V5_5/DeploymentFrameworkForBizTalkDocs.html?DeployConfigurationSettingsintoS.html
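For example, the "install the MSI" step reduces to a plain msiexec call (the file name here is hypothetical):

rem Install the BTDF-generated MSI silently, logging for troubleshooting
msiexec /i "MyBizTalkApp-1.2.3.msi" /quiet /log install.log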
I hope this helps!
I have installed TFS 2018 and I'm trying to set up a dedicated build server for it.
I have three Windows servers: WindowsBox1 for TFS (TFS 2018 installation completed), WindowsBox2 for the build server (build server setup steps and architecture needed), and WindowsBox3 for the DB (DB installation completed).
I want to set up the build server on WindowsBox2, and I'm looking for best practices or steps to follow.
In the administration section, I see an agent download option in the Agent Pools tab.
If I download and install the agent on the WindowsBox2 server, will that be considered a build server?
And what are the differences between an agent and a build server setup?
TFS has no concept of a "build server". If an agent is configured on a box and the agent is running, then that box can run builds (and releases, since the release agent is the same piece of software). That's all there is to it. Build agents are assigned to agent pools, which dictate the set of available agents.
In previous iterations of the build system (XAML build, TFS 2010 - TFS 2013, although it remains configurable up to TFS 2018), you had to register a build controller and assign build agents to it. XAML build is deprecated and should not be used except for pre-existing legacy builds, so if you're not already using XAML build, you can safely ignore this paragraph.
You can refer to this article (Deploy an agent on Windows) to set up a TFS build/release agent; after that, this is your "build server".
There are interactive and service modes; by default the agent runs in interactive mode. In this mode you need to call run.cmd (in the same folder as config.cmd) to start the agent, after which the agent's state shows as online.
With service mode, you can check whether the corresponding service is running in the Services console.
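As a rough sketch, the two modes differ only in how the agent is configured and started (the server URL, pool, and agent name below are placeholders):

rem Interactive mode: configure, then start manually with run.cmd
config.cmd --url http://yourtfsserver:8080/tfs --auth integrated --pool default --agent WindowsBox2
run.cmd

rem Service mode: add --runAsService so the agent runs as a Windows service
config.cmd --unattended --url http://yourtfsserver:8080/tfs --auth integrated --pool default --agent WindowsBox2 --runAsService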
Reading the VSTS documentation about Build and Release Agents, which says:
Each agent automatically updates itself when it runs a task that requires a newer version of the agent. But if you want to manually update some agents, right-click the pool, and then click Update all agents.
That doesn't work for me.
I tried to "right-click the pool, and then click Update all agents"; the status changes to "Downloading version ....", but I can't see any change on the agent.
Every time, I have to uninstall the agent, download the new version, and reinstall it. I've checked directory permissions and everything looks fine. The agents are installed on Windows Server 2012 x64.
Any idea?
It takes a few minutes (depending on the environment, such as the network) to update the agent, and it will restart automatically; afterwards you can check the Agent.Version value in Capabilities.
Are your agents on a machine behind a proxy?
In this case, you need to configure the proxy:
Add a file named .proxy in the root folder where the agent is installed
Write the proxy address to be used as its content, for example http://192.168.0.1:1234
If your proxy needs authentication, you must set these environment variables:
set VSTS_HTTP_PROXY_USERNAME=user
set VSTS_HTTP_PROXY_PASSWORD=password
Restart the agent service to apply the change
The agent should now be able to connect to the internet and download the update.
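Putting those steps together, a minimal sketch (the agent folder, proxy address, and service name are assumptions; the service name usually follows the pattern vstsagent.{account}.{agent}):

cd C:\agent
rem The .proxy file contains only the proxy URL
echo http://192.168.0.1:1234> .proxy
rem Only needed if the proxy requires authentication
set VSTS_HTTP_PROXY_USERNAME=user
set VSTS_HTTP_PROXY_PASSWORD=password
rem Restart the agent service so it picks up the change
net stop vstsagent.youraccount.youragent
net start vstsagent.youraccount.youragent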
My protractor tests work fine on my local machine and on an Azure VM running Windows Server 2012 R2 when accessed via RDP. I explicitly set the browser window size in my tests using browser.driver.manage().window().setSize(1600, 900); and it allows the tests to work properly.
However, when the VM mentioned above is used as a build machine controlled by a VSO (VSTS) agent, my protractor tests fail. I suspect this happens because the screen resolution of the VSO agent session is smaller than the resolution specified in my tests, and WebDriver (ChromeDriver) can't set a resolution higher than the OS allows.
My question is how to change screen resolution of Azure VM for VSO agent session?
I tried a custom utility for changing screen resolution from here,
and it works on my PC; however, when it is executed by the VSTS agent on the Azure VM it throws an error:
System.InvalidOperationException: The display driver failed the specified graphics mode.
In order to run the protractor tests, the agent needs an interactive session. Configure the agent to run interactively, instead of as a service.
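A minimal sketch of switching an already-configured agent from service mode to interactive mode (the agent folder C:\agent is an assumption):

cd C:\agent
rem Remove the existing (service) configuration
config.cmd remove
rem Reconfigure without --runAsService, then start the agent in the console
config.cmd
run.cmd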
It did help to run the agent interactively. When I connect to my build machine via RDP, it gets the screen resolution of my client machine. Then when I launch the VSO agent and disconnect from RDP, that display resolution remains on the build machine, so Selenium can maximize the browser window.
When I deploy a web role to Azure using the management portal, the process takes about 20 minutes. But when I deploy using Visual Studio, it can take hours, stuck at "Initializing"/"Waiting for host". Eventually, it does deploy and run normally.
Any thoughts on what's wrong?
Notes:
I'm deploying through Visual Studio in order to be able to use IntelliTrace and Web Deploy.
No errors appear at any time during the deployment
Installing Web Deploy, RDP, any plugins, etc. will lengthen the deployment time. I am pretty sure that Web Deploy in particular causes the machine to reboot, which adds a few minutes. This is probably what you notice most (VMs reboot and it takes a while for them to come back).
I believe that deploying from Visual Studio releases the host(s) and acquires new one(s). This is evidenced by the fact that the IP addresses associated with the roles typically change when you deploy from Visual Studio.
But upgrading via the Portal re-images the existing host(s).
That presumably accounts for a significant proportion of the time difference, especially if there isn't a host available in the relevant upgrade or fault domain.