Problem
I am trying to deploy a worker role that will autoscale a few target sites. I am able to run the autoscaler locally and it works (I installed the certificates on my machine). However, once I deploy it to Azure as a cloud service, it won't autoscale. (The worker role itself is running, because I can see my non-autoscaling processes working in the same role.)
What I tried
I have followed the instructions in Deploying the Autoscaling Application Block.
Added the "CN=Windows Azure Tools" certificate to the management certificates of the target subscription.
Added the "CN=Windows Azure Tools" certificate to the autoscaling application's certificates.
Specified the location of my cert in the worker role.
Specified the location of the cert in my service store for configuring autoscaling (roughly as sketched below).
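For reference, the relevant entries look roughly like this; the certificate name and thumbprint are placeholders for my actual values:

<!-- ServiceDefinition.csdef: where the role installs the certificate -->
<Certificates>
  <Certificate name="WindowsAzureTools" storeLocation="CurrentUser" storeName="My" />
</Certificates>

<!-- ServiceConfiguration.cscfg: which certificate to deploy, identified by thumbprint -->
<Certificates>
  <Certificate name="WindowsAzureTools" thumbprint="0000000000000000000000000000000000000000" thumbprintAlgorithm="sha1" />
</Certificates>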
What am I missing?
Thanks
Tuzo is right: the cert should be in LocalMachine, but that's not enough. See this SO post. Basically, in OS Family 2, WaWorkerHost runs under a temporary account (with a GUID name) generated by the role initialization process, and that account has permission to access the certificate's private key. In OS Family 3, WaWorkerHost runs under the NETWORK SERVICE account, which doesn't have permission to access the private key.
The best option for now (the Microsoft Azure team is addressing the issue in the next SDK) is to run the role with elevated privileges; edit ServiceDefinition.csdef:
<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="blah" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2012-10.1.8">
  <WorkerRole name="blah" vmsize="Small">
    <Runtime executionContext="elevated" />
    ...
  </WorkerRole>
</ServiceDefinition>
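As I understand it, executionContext="elevated" makes the role host process (WaWorkerHost) run as LocalSystem, which does have access to the private keys in the machine store.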
For running in Azure, I would try setting the Store Location to LocalMachine.
If you've followed all the steps in Deploying the Autoscaling Application Block, then the certificate with the private key (.pfx) should be deployed in the role. You can RDP into the server to verify that the certificate is installed (and where it is installed).
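Once you've RDPed in, a quick check from PowerShell will show whether the certificate (and its private key) made it into the machine store; the subject filter here is an assumption based on the certificate name above:

Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like '*Windows Azure Tools*' } |
    Select-Object Subject, Thumbprint, HasPrivateKey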
You can also try enabling logging as per Autoscaling Application Block Logging to see if there are any messages.
Related
This is driving me crazy; I've been trying to get this to work for three days now. I'm trying to connect a Kubernetes deployment to my Cloud SQL database in GCP.
Here's what I've done so far:
Set up the Cloud SQL proxy to run as a sidecar in my deployment (roughly as sketched below)
Created a GKE service account and attached it to my deployment
Bound the GKE service account to my GCP service account
Granted the service account what is (as far as I can tell) Owner permission
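The proxy sidecar in my deployment spec looks roughly like this; the image tag and instance connection name are placeholders:

- name: cloud-sql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
  command:
    - "/cloud_sql_proxy"
    - "-instances=MY_PROJECT:MY_REGION:MY_INSTANCE=tcp:3306"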
Yet when I run the deployment in GKE I still get:
the default Compute Engine service account is not configured with sufficient permissions to access the Cloud SQL API from this VM. Please create a new VM with Cloud SQL access (scope) enabled under "Identity and API access". Alternatively, create a new "service account key" and specify it using the -credential_file parameter
How can I fix this? I can't find any documentation on how to set up the service account with the correct permissions for Cloud SQL, or on how to debug this issue. Every tutorial I can find ends with "bind your service account" and then stops. Nothing describes what permissions are needed, and nothing explains how to actually connect to the DB from my code (how would my code talk to the proxy?).
Please help
FINALLY got it to work!
Two major pieces that the main article on this (cloud.google.com/sql/docs/mysql/connect-kubernetes-engine) glosses over:
Properly setting up workload identity (see the commands sketched after this list), for which I found these links to be very helpful:
a) https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
b) https://www.youtube.com/watch?v=l-nws1e4B8M
To connect to the DB, you have to have your code use 127.0.0.1 as the DB host; the sidecar proxy forwards that local port to the Cloud SQL instance.
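For reference, the workload identity binding boils down to three commands; every name here (project, namespace, service accounts) is a placeholder:

# Allow the Kubernetes service account to impersonate the GCP service account
gcloud iam service-accounts add-iam-policy-binding \
    my-gsa@my-project.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:my-project.svc.id.goog[my-namespace/my-ksa]"

# Point the Kubernetes service account at the GCP service account
kubectl annotate serviceaccount my-ksa --namespace my-namespace \
    iam.gke.io/gcp-service-account=my-gsa@my-project.iam.gserviceaccount.com

# Let the GCP service account call the Cloud SQL API
gcloud projects add-iam-policy-binding my-project \
    --member "serviceAccount:my-gsa@my-project.iam.gserviceaccount.com" \
    --role roles/cloudsql.client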
I have a stateless app running as a C# executable in my Service Fabric cluster. The app uses managed identity to connect to Azure Key Vault. I have granted a Key Vault access policy to the virtual machine scale set's managed identity, but when the app tries to connect to Key Vault, it gets this exception:
Azure.Identity.AuthenticationFailedException (-2146233088)
DefaultAzureCredential failed to retrieve a token from the included credentials.
EnvironmentCredential authentication unavailable. Environment variables are not fully configured?
Most of the articles talk about this exception when running on a local machine, but I am running Service Fabric on Azure and still get the exception.
Any pointers on how to troubleshoot further?
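One way to narrow it down is to bypass the DefaultAzureCredential chain and use ManagedIdentityCredential directly, so any failure comes from the managed identity (IMDS) endpoint on the node rather than from the environment-variable probe. A minimal sketch; the vault URI and secret name are placeholders:

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// Go straight to the managed identity endpoint
// (pass a client ID here if the identity is user-assigned).
var credential = new ManagedIdentityCredential();
var client = new SecretClient(new Uri("https://my-vault.vault.azure.net/"), credential);

// If the identity is not available on the node, this surfaces the
// underlying managed identity error instead of the generic chain failure.
KeyVaultSecret secret = client.GetSecret("my-secret");
Console.WriteLine($"Retrieved secret '{secret.Name}'.");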
We have a release pipeline that is failing with the following message:
resource ID for resource type 'Microsoft.Web/Sites' and resource name 'appservicename'. Error: Could not fetch access token for Managed Service Principal. Please configure Managed Service Identity (MSI) for virtual machine 'https://aka.ms/azure-msi-docs'. Status code: 400, status message: Bad Request
We have 2 different service connections:
Azure Resource Manager using service principal authentication
Azure Resource Manager using managed identity authentication
The first one works like a charm. However, because the developer wanted to limit admin access on Azure AD, he tried creating a managed identity service connection. At first glance it appeared to work, since it allowed us to select the App Service, until an actual deployment was triggered and failed with the error message above.
After numerous searches online, I think this answer may be the clue to why this fails with the managed identity service connection yet succeeds with the service principal connection.
I just want to confirm: is it truly the case that a hosted agent doesn't support MSI-based authentication, which is what we are using, or has that changed?
We are indeed using the Microsoft-hosted agent pool.
It doesn't make sense for our app service to use a VM at this time. The use case just isn't applicable for the dashboards we have.
As written in the docs:
You are required to use a self-hosted agent on an Azure VM in order to use managed service identity
I assume it has always been like that. Here we are talking about the MSI assigned to the VM that serves as the build agent, not the MSI that is the identity of the App Service. Why? A service connection is an abstraction that makes authenticating to your Azure subscription easy. It gives an identity to the VM, and when you then perform some action against Azure, the MSI lets Azure know that the agent can perform that action. The other option is authentication via a service principal, which can be done from any VM (including Microsoft-hosted ones) because it relies on a client ID and client secret kept in the service connection. An MSI, by contrast, has to be assigned to a particular VM, which cannot be done with Microsoft-hosted agents.
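If you do go the managed identity route on a self-hosted agent, enabling the identity on the agent VM is one Azure CLI call; the resource group and VM names here are placeholders:

# Enable a system-assigned managed identity on the build agent VM
az vm identity assign --resource-group my-agent-rg --name my-agent-vm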
In the past, we used VSTS build agents running with domain accounts on on-prem build machines. In that scenario, certificates could be stored in the domain account's personal store (manually, by logging in once with this account), so a later build could get the certificates by thumbprint for signing, e.g., a manifest.
Now the agents run as "Network Service", because we no longer have a local domain (everything moved to Azure AD). Everything works except the retrieval of certificates from the store. I have already used the MMC snap-in to connect to the service account's store (VSTSAgent) and installed certificates into its personal store, but the build still fails with "Error MSB3323: Unable to find manifest signing certificate in the certificate store.".
If I log on to the machine and run the build from within VS, all works well, but of course I am then using a different account (with a different personal store). This at least tells me that the solution and projects are fine. The pipelines are OK as well, because they still work on the "old" build machines that use a domain account.
So, if anyone has an idea, or can point me to some information on how to use the VSTS agent running as "Network Service" together with signing (from the certificate store), that would be highly appreciated.
Many thanks, Sebastian
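One commonly suggested workaround is to put the certificate in the machine-wide store and give "Network Service" read access to its private key. A sketch, assuming a CSP-backed RSA key (CNG keys live under a different path); file path and password are placeholders:

# Import the signing certificate into the machine-wide store
$cert = Import-PfxCertificate -FilePath 'C:\certs\signing.pfx' `
    -CertStoreLocation Cert:\LocalMachine\My `
    -Password (ConvertTo-SecureString 'placeholder' -AsPlainText -Force)

# Locate the private key file and grant NETWORK SERVICE read access
$keyName = $cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName
$keyPath = Join-Path $env:ProgramData "Microsoft\Crypto\RSA\MachineKeys\$keyName"
icacls $keyPath /grant 'NETWORK SERVICE:R'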
I created a new secure Service Fabric cluster on Azure, with the cluster and admin client certificates in Azure Key Vault. I installed the admin client certificate in both the current user and local machine stores, but whenever I try to connect to that cluster, or explore it in the browser, I get an access denied error. Connecting from Visual Studio fails as well. These are the connection parameters in Visual Studio:
<ClusterConnectionParameters ConnectionEndpoint="my.end.point.com:19000"
                             X509Credential="true"
                             ServerCertThumbprint="ClusterCertificateThumbPrint"
                             FindType="FindByThumbprint"
                             FindValue="AdminClientCertificateThumbPrint"
                             StoreLocation="CurrentUser"
                             StoreName="My" />
What am I doing wrong?
I experienced something similar; my issue was that I had the wrong ServerCertThumbprint. I created my Service Fabric cluster as part of the Visual Studio publish step, and in that case the configuration looked like this:
<ClusterConnectionParameters ConnectionEndpoint="myservicefabricname:19000"
                             X509Credential="true"
                             ServerCertThumbprint="certificateThumbprint"
                             FindType="FindByThumbprint"
                             FindValue="certificateThumbprint"
                             StoreLocation="LocalMachine"
                             StoreName="My" />
In my setup, the local (client) certificate and the Service Fabric cluster certificate share the same thumbprint, so the same value goes in both ServerCertThumbprint and FindValue.
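You can also sanity-check the parameters outside Visual Studio with the Service Fabric PowerShell module; the endpoint and thumbprints below are placeholders, and StoreLocation must match wherever the client certificate is installed:

Connect-ServiceFabricCluster -ConnectionEndpoint 'myservicefabricname:19000' `
    -X509Credential `
    -ServerCertThumbprint 'certificateThumbprint' `
    -FindType FindByThumbprint `
    -FindValue 'certificateThumbprint' `
    -StoreLocation LocalMachine `
    -StoreName My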
Additionally, it seems that even though I added the ClusterConnectionParameters in the XML config, when I went to "Publish" and expanded "Advanced Parameters" I had to enter the values manually.
In case you don't know how to find the thumbprint, you can follow this tutorial: https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-retrieve-the-thumbprint-of-a-certificate