Azure Deployment Failed - not enough free threads in the ThreadPool

I was deploying an Azure instance using Visual Studio (2010) and I got this:
06:20:05 - Preparing deployment for **** with Subscription ID: ****...
06:20:05 - Connecting...
06:20:06 - Verifying storage account '****'...
06:20:07 - Uploading Package...
06:20:38 - There were not enough free threads in the ThreadPool to complete the operation.
06:20:38 - Deployment failed
I tried again and it worked, but this has got me a bit worried...
Do I have a threading issue in my instance, is it in Visual Studio or some management system on Azure?
I can't find a single mention of this anywhere and it does seem a bit nonsensical; how can there not be enough threads to replace an instance?
I'm using parallel tasks for a few cross-federation lookups, but the system isn't even in production yet - the number of users can be counted on one hand - and even if it were, there is presently no more than one member in any federation...
I can't see any reason why there would be any problems, but I would very much like to know what on earth could cause this.
It's a single small compute instance with a web role and a worker role - latest Azure version and .NET 4.0

I have not seen anything like this before, but it appears to me that it is local to your development machine.
When you deploy from Visual Studio, the first step is to copy the deployment files to your Azure storage account. You can see that this is what was happening from the message right before the failure ("06:20:07 - Uploading Package..."). The error occurred during the push of the files to Azure storage and has nothing to do with your Azure project or any of the role size/definition.
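If you want to rule out thread pool starvation on your own machine the next time this happens, a quick check along these lines (purely a diagnostic sketch, not part of the deployment tooling) shows how much headroom the pool has:

using System;
using System.Threading;

class ThreadPoolDiagnostic
{
    static void Main()
    {
        int workerAvailable, ioAvailable, workerMax, ioMax;
        ThreadPool.GetAvailableThreads(out workerAvailable, out ioAvailable);
        ThreadPool.GetMaxThreads(out workerMax, out ioMax);
        // Available counts near zero relative to the max indicate the pool is saturated.
        Console.WriteLine("Worker threads: {0} of {1} available", workerAvailable, workerMax);
        Console.WriteLine("I/O completion threads: {0} of {1} available", ioAvailable, ioMax);
    }
}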
I would not be concerned from an Azure perspective here.
Hope this helps some!

Related

How to resolve "No hosted parallelism has been purchased or granted" in free tier?

I've just started with Azure DevOps pipelines and have created a very simple pipeline with a Maven task. For now I don't care about parallelism, and I'm not sure how I've even added it to my pipeline. Is there any way to use the Maven task on the free tier without parallelism?
This is my pipeline:
trigger:
- master
pool:
  vmImage: ubuntu-latest
steps:
- task: Maven@3
My thought was that tasks always run in parallel? Other than that, I can't see where the parallel step is.
First: tasks are always executed sequentially. And a single sequential pipeline is documented as "1 parallel agent" - yes, the naming could be better. Due to the changes laid out below, new accounts now get zero parallel agents, and a manual request must be made to get the previous default of 1 parallel pipeline and the free build minutes.
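To make the naming concrete: a single-job pipeline like yours only ever consumes one "parallel job". A sketch of what would actually queue behind the grant - two independent jobs, with illustrative names, assuming a pom.xml at the repository root:

trigger:
- master
pool:
  vmImage: ubuntu-latest
jobs:
- job: Build
  steps:
  - task: Maven@3
- job: Lint
  steps:
  - script: echo "Runs alongside Build only if a second parallel job is available"

With the free grant of one parallel job, Build and Lint would simply run one after the other.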
See this:
We have temporarily disabled the free grant of parallel jobs for public projects and for certain private projects in new organizations. However, you can request this grant by submitting a request. Existing organizations and projects are not affected. Please note that it takes us 2-3 business days to respond to your free tier requests.
More background information on why these limitations are in play:
Change in Azure Pipelines Grant for Private Projects
Change in Azure Pipelines Grant for Public Projects
Changes to Azure Pipelines free grants
TL;DR: People were using automation to spin up thousands of Azure DevOps organizations, adding a pipeline, and using the service to send spam, mine bitcoin, or for other nefarious purposes. The fact that they could do so free, quickly, and without any human intervention was a burden on the team. Automatic detection of nefarious behavior proved hard and turned into an endless cat-and-mouse game. The manual step is a necessary evil that has put a stop to this abuse and is in no way meant as a step towards further monetization of the service. It's actually to ensure a free tier remains something that can be offered to real people like you and me.
This is absurd. The 'free tier' is not entirely free unless you request it!
Best option: use a self-hosted pool. It can be your laptop, or wherever you would like to run tests.
MS Azure doc here,
then use that pool in your YAML file:
pool: MyPool
Alternatively
Request access from MS:
Folks, you can request it here. Typically it gets approved in a day or two.
##[error]No hosted parallelism has been purchased or granted. To request a free parallelism grant, please fill out the following form https://aka.ms/azpipelines-parallelism-request
The simplest solution is to change the project from public to private so that you can use the free pool. Private projects have a free pool by default.
Otherwise, consider using a self-hosted pool on your machine, as suggested above.
Here's the billing page.
If you're using a recent version of macOS with Gatekeeper, this "security enhancement" is a serious PITA for the unaware: you get hundreds of errors, and each denied assembly has to be manually allowed in Security.
Don't do that.
After downloading the agent file from DevOps and BEFORE you unzip the file, run this command on it. This will remove the attribute that triggers the errors and will allow you to continue uninterrupted.
xattr -c vsts-agent-osx-x64-V.v.v.tar.gz ## replace V.v.v with the version in the filename downloaded.
# then unpack the gzip tar file normally:
tar xvfz vsts-agent-osx-x64-V.v.v.tar.gz
Here are all the steps you need to run, including the above, so that you can move past the "hosted parallelism" issue and continue testing immediately, either while you are waiting for authorization or to skip it entirely.
Go to Project settings -> Agent pools
Create a new agent pool; call it "local" (call it whatever you want, or you can also do this in the Default agent pool)
Add a new agent and follow the instructions, which include downloading the agent for your OS (macOS here).
Run xattr -c vsts-agent-osx-x64-V.v.v.tar.gz on the downloaded file to remove the Gatekeeper security issues.
Unzip the archive with tar xvfz vsts-agent-osx-x64-V.v.v.tar.gz
cd into the extracted directory and run ./config.sh. The most important configuration option is the Server URL, which will be https://dev.azure.com/{organization name}; the defaults are fine for the rest. Continue until you are back at the command prompt. At this point, if you look inside DevOps, either in your new agent pool or in Default (depending on where you put the agent), you'll see your new agent as "offline", so run:
./run.sh, which will bring your agent online. Your agent is now running and listening for you to start your job. Note that this will tie up your terminal window.
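If tying up a terminal becomes a nuisance, the agent download also ships with a service wrapper script that registers it as a background service (launchd on macOS). A sketch, run from the agent directory after ./config.sh has completed:

./svc.sh install   # register the agent as a service
./svc.sh start     # start it in the background
./svc.sh status    # confirm it is running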
Finally, in your pipeline YAML file configure your job to use your local agent by specifying the name of the agent pool where the self-hosted agent resides, like so:
trigger:
- main
pool:
  name: local
# pool:
#   vmImage: ubuntu-latest
I faced the same issue. I changed the project visibility from Public to Private and then it worked. No need to fill out a form or purchase anything.
Best Regards,
Hitesh

How Do Service Connections Work For On-Prem Agents Connecting To On-Prem Services?

This question is purposefully general: I'm trying to understand things more from an architectural perspective, since that will impact which group I need to contact. My team is using Azure DevOps (cloud) with on-prem build agents. The agents connect to ADO via a proxy.
We use several tools in-house provided by vendors with ADO plugins in the Marketplace that require us to set up service connections. Because the services are installed on-prem, the endpoints we enter are not available via the Web (e.g. https://vendor-product.my-company.com).
If I log into the build machine and open up IE, I am able to connect to the service endpoint URL. However, whenever I try to run a task from ADO, it fails with some kind of connection-related issue ("The underlying connection was closed: An unexpected error occurred on a send", "Task ended with an exception: Error: read ECONNRESET", etc.).
The way I thought it worked, all the work takes place on the build machine itself, so the calls would be going from my-build-server.my-company.com to https://vendor-product.my-company.com. Those error messages though make me wonder if the connection is actually coming from https://dev.azure.com.
So the questions I have are:
For situations like this, is the connection to a service endpoint going to be seen as coming from my on-prem build agent, or from ADO (or does it vary based on how the vendor writes their plugin)?
If the answer to #1 is "it varies", is there any way for me to tell just from the plugin itself without having to contact the vendor? (In my experience some of the vendor reps don't understand how the cloud works.)
and/or
Because my build agent was configured to use a proxy when I set it up, is it going to use that proxy for all connections, even internal ones? I think I can set up a proxy bypass list for the agents but I presently only have read access to the build box. I can request temporary elevated access but I'd need some level of confidence that's what the issue is.
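On that last point: the agent does route outbound requests through its configured proxy by default (and passes the proxy settings on to tasks, though whether a vendor task honors them can vary), but it also honors a .proxybypass file in the agent's root directory, where each line is a regular expression matched against request URLs. A sketch, using the vendor endpoint from above; you would need write access to the agent directory, and the agent must be restarted to pick the file up:

# In the agent's root directory; each line is a regex matched against the request URL
echo 'vendor-product\.my-company\.com' >> .proxybypass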
Hope I explained the situation clearly, thanks in advance for any insight.

VSTS Agent service can't get code coverage data when running as Local System

Short version: Two builds, A and B, for the same commit, both running on our build server using the VSTS agent service
Build A:
Agent running as Network Service
Saves a .coverage file of 267kb, showing non-zero % code coverage
Runs successfully, no errors, same test logs as build B
Build B:
Agent running as Local System
Saves a .coverage file of 1kb, showing 0% code coverage
Runs successfully, no errors (except that a quality gate fails due to the 0% code coverage, but that's intentional), same test logs as build A
Extra info:
The VSTS Agent service normally ran on our build server as "Network Service", and all was well, until we had to modify the agent service to run as "Local System" so it could access a cert in the "LocalMachine" store, which we need for Azure AD service auth. After that, it still claimed to do everything successfully, except that the code coverage file is tiny and claims 0% code coverage, which is weird because the unit tests are certainly being run. The logs from the two test tasks are exactly identical (apart from things like timestamps and build numbers), with no helpful warnings or errors.
I'm sure it's not ideal to run the agent as Local System, but that account has more permissions than Network Service does, so I don't see how it could be a permissions issue. I've probably just made a mistake in setting something up, but it seems like the only ways out of this are to:
give Network Service extra permissions (bad)
regenerate / move the Azure AD service principal cert into the "CurrentUser" cert store for Network Service (feels bad but I'm not sure why; see the sketch after this list for a third route)
set up a new service account and resign ourselves to having permissions issues forevermore (ugh)
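For what it's worth, there is a common middle road between those options: leave the cert in the LocalMachine store and grant Network Service read access to its private key file. A hedged C# sketch of that ACL approach; the subject name is hypothetical, it assumes an RSA CSP key (the usual case for machine certs), and it must be run elevated:

using System;
using System.IO;
using System.Security.AccessControl;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

class GrantKeyReadAccess
{
    static void Main()
    {
        var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);
        // Hypothetical subject name; substitute the Azure AD service-auth cert.
        X509Certificate2 cert = store.Certificates
            .Find(X509FindType.FindBySubjectName, "my-service-auth-cert", false)[0];
        // Machine-store CSP keys live as files under ProgramData; locate this key's file.
        var rsa = (RSACryptoServiceProvider)cert.PrivateKey;
        string keyPath = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
            @"Microsoft\Crypto\RSA\MachineKeys",
            rsa.CspKeyContainerInfo.UniqueKeyContainerName);
        // Add a read-only ACL entry for Network Service.
        var keyFile = new FileInfo(keyPath);
        FileSecurity acl = keyFile.GetAccessControl();
        acl.AddAccessRule(new FileSystemAccessRule(
            @"NT AUTHORITY\NETWORK SERVICE", FileSystemRights.Read, AccessControlType.Allow));
        keyFile.SetAccessControl(acl);
        store.Close();
    }
}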
Can we somehow diagnose what exactly is going on with this test task without resorting to procmon? Or is there a better way to manage this stuff?
Well this is rather annoying: I fixed it, but I don't know how. While demonstrating it to a colleague, all I did was repeat my previous steps of rebooting the server and switching the agent service back and forth between the two accounts a couple of times, at which point the problem stopped being reproducible. It seems this is one of those mysteriously vanishing problems that hides whenever you try too hard to investigate it. Hopefully it doesn't come back...

Cannot create Services in IBM Bluemix

Every time I try to create a service in IBM Bluemix (web and CLI), the following error message appears:
Creating service instance my-compose-for-mysql-service in org XXX / space XXX as XXX...
FAILED
Server error, status code: 502, error code: 10001, message: Service broker error: {"description"=>"Deployment creation failed - {\"errors\":\"insufficient_storage\",\"status\":507}"}
How can I free storage or fix the error?
I already did the following steps:
Delete all other spaces and apps
Delete all services
Reinstall CLI
This error message is stating that the compose backend has reached full capacity and does not have enough resources to create your service.
The compose engineers will be aware of this issue and will be working towards adding more capacity to the backend.
Please wait and try again later, or if urgent raise a support ticket.
Are you using the experimental version of the MYSQL service, which has been retired? The experimental instances were disabled recently on August 7, 2017. There is a newer production version of the Compose for MySQL service, which is available here: https://console.ng.bluemix.net/catalog/services/compose-for-mysql/
For more information about the experimental service retirement and its replacement, see: https://www.ibm.com/blogs/bluemix/2017/06/bluemix-experimental-services-sunset/
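If the retirement is the cause, creating the production service from the CLI would look roughly like this; the plan name here is an assumption, so check the marketplace listing first:

cf marketplace -s compose-for-mysql          # list the available plans
cf create-service compose-for-mysql Standard my-compose-for-mysql-service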
Okay, after reaching out to various support agents:
The problem is not a general bug. I was using a company-related account, which accumulates all databases of the company domain in one sandbox, and that sandbox simply ran out of storage. Compose already seems to be working on it.
My solution until the official fix: Use a different non-business account to host the database.

Can I create plugins for an Azure Worker Role ?

I would like to make a Worker Role in Azure that handles some behind-the-scenes processing for a Web Role. In the Web Role I would like to upload a plugin (a DLL, most likely) which becomes available for the Worker Role to use.
What about security? If I were to let third-party people upload a DLL to my Azure Worker Role, can I do anything to limit what it can do? It would not be nice if they could take control of the management API or something like that.
I am new to Azure and exploring whether it's a platform to use for this project.
Last question: I noticed that I could remote desktop into my cloud service. Could I upload binary programs to it and call them from the Worker Role as well? (Another kind of plugin.)
There are a few things you might want to look at. Let's assume your Worker Role is an empty shell. After starting the Worker Role you could start a timer that runs every X minutes to fetch the latest assemblies from, for example, a blob storage container.
You can download these assemblies to a folder and use MEF to scan them and import all objects implementing IWorkerRolePlugin for example (this would be a custom interface you would create). MEF would be the best choice when you want to work with plugins. You could even create a custom catalog that directly links with a blob storage container.
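A minimal sketch of that MEF approach, assuming the downloaded assemblies have already landed in a local folder and using the hypothetical IWorkerRolePlugin contract suggested above:

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// The custom plugin contract the answer suggests you would create.
public interface IWorkerRolePlugin
{
    void Execute();
}

public class PluginHost
{
    // MEF fills this with every matching export it finds; AllowRecomposition
    // lets assemblies pulled down later flow in without restarting the role.
    [ImportMany(typeof(IWorkerRolePlugin), AllowRecomposition = true)]
    public IEnumerable<IWorkerRolePlugin> Plugins { get; set; }

    public void LoadAndRun(string pluginFolder)
    {
        // Scan every assembly in the folder for IWorkerRolePlugin exports.
        var catalog = new DirectoryCatalog(pluginFolder);
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);

        foreach (IWorkerRolePlugin plugin in Plugins)
            plugin.Execute();
    }
}

// A plugin author would just export the contract from their DLL:
[Export(typeof(IWorkerRolePlugin))]
public class HelloPlugin : IWorkerRolePlugin
{
    public void Execute() { Console.WriteLine("Hello from a plugin"); }
}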
Now about the security part. In your Worker Role you could for example create a restricted AppDomain to make sure these plugins can't do anything wrong. This code should get you started: Restricted AppDomain example
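A rough sketch of what creating such a restricted AppDomain can look like; the permission set and folder are illustrative, and the linked example goes into more depth:

using System;
using System.Security;
using System.Security.Permissions;

class PluginSandbox
{
    // pluginFolder must be an absolute path.
    static AppDomain CreateRestrictedDomain(string pluginFolder)
    {
        // Grant only code execution plus read access to the plugin folder:
        // no file writes, no network access, no reach into the management API.
        var permissions = new PermissionSet(PermissionState.None);
        permissions.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
        permissions.AddPermission(new FileIOPermission(FileIOPermissionAccess.Read, pluginFolder));

        var setup = new AppDomainSetup { ApplicationBase = pluginFolder };
        return AppDomain.CreateDomain("PluginSandbox", null, setup, permissions);
    }
}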
Try the Azure Plugin Library by Richard Astbury!
Sounds like Lokad.Cloud is just what you need.
It has an execution framework part which consists of worker roles capable of running what they have named a Cloud Service. It comes with a web console which allows you to add new CloudService implementations by uploading assemblies, and if you configure it to allow for Azure self management you can also adjust the number of worker instances through the web console.