Key Vault works in local development but not on a deployed web app

I've been working on this for several days now, and after reading a lot of documentation I am completely out of ideas.
What I've done so far:
Created three web apps (DEV, UAT, PROD) and switched System Assigned to ON to create a Managed Identity for each. This also registered the web applications in Azure Active Directory.
Registered an application in Azure Active Directory that includes redirect URIs for DEV, UAT, PROD, and the local URL.
Created Key Vault under the same resource group as the web apps for DEV, UAT, PROD.
Included all four applications in the Key Vault's access policy with Get and List secret permissions (a CLI sketch of this step follows the list).
App Service Authentication is set to OFF (App Service -> Authentication/Authorization -> OFF); we're using another means of user authentication.
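For reference, granting that access policy from the az CLI looks roughly like this, where the vault name and principal object ID are placeholders for your values:
az keyvault set-policy --name my-vault --object-id <principal-object-id> --secret-permissions get list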
After this setup, Key Vault is accessible from localhost but not from the deployed environment. I believe this because the app isn't retrieving the connection string from Key Vault when deployed.
I've consumed all sorts of documentation on this, including https://learn.microsoft.com/en-us/azure/key-vault/service-to-service-authentication.
The project is .NET Framework 4.7.2.
In the code I have the following, which retrieves my secrets locally but not when deployed:
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

public static string GetSecret(string SecretName, string vaultURL)
{
    // Uses the app's Managed Identity when deployed, or the developer's
    // Visual Studio / Azure CLI sign-in when running locally.
    var azureServiceTokenProvider = new AzureServiceTokenProvider();
    var keyVaultClient = new KeyVaultClient(
        new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
    return keyVaultClient.GetSecretAsync(vaultURL, SecretName).Result.Value;
}
Please let me know what I'm missing that's preventing my web app from accessing Key Vault.

Make sure that you have updated any configuration that is specific to the local application. If it works locally but not when published, there may be a reference to the localhost environment in the code or in the portal that needs to be updated for the published environment. For example, people sometimes leave a localhost URL in the app registration and forget to add the published site's URL both there and in the code.
Also, please share the error that you are receiving.

After some careful debugging, I found out that my issue was actually a relatively simple one. Key Vault did grant access to my web app through its managed identity, but this particular call was causing my issues: ConfigurationManager.ConnectionStrings[1].Name. Its purpose was to get the name of the connection string to feed into my DbContext, but it didn't return the value I expected in production. After some changes to the code, the problem is solved!
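For anyone hitting the same thing, here is a minimal sketch of the safer by-name lookup; the name "MyDbConnection" is a placeholder. Indexing is fragile because ConnectionStrings also inherits entries from machine.config, so [1] can point to a different entry on the server than it does locally:
using System.Configuration;

// Look the connection string up by name instead of by position.
var cs = ConfigurationManager.ConnectionStrings["MyDbConnection"];
if (cs == null)
    throw new ConfigurationErrorsException("Connection string 'MyDbConnection' was not found.");
string name = cs.Name; // safe to feed into the DbContext constructor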

Related

Injected DB credentials change when I deploy a new app version to the cloud

I deploy a web app to a local Cloud Foundry environment. As the database service for my DEV environment, I chose the Marketplace service google-cloudsql-postgres with the plan postgres-db-f1-micro. Using the web UI I created an instance named myapp-test-database and referenced it in the CF manifest:
applications:
- name: myapp-test
  services:
  - myapp-test-database
At first, all is fine. I can even redeploy the existing artifact. However, when I build a new version of my app and push it to CF, the injected credentials are updated and the app can no longer access the tables:
PSQLException: ERROR: permission denied for table
The tables are still there, but they're owned by the previous user. They were automatically created by the ORM in the public schema.
While the -OLD application still exists I can retrieve the old username/password from the CF Web UI or $VCAP_SERVICES and drop the tables.
Is this all because of Rolling App Deployments? But then I would expect a lot of complaints.
If you are strictly doing a cf push (or restart/restage), the broker isn't involved (Cloud Controller doesn't talk to it), and service credentials won't change.
The only action through cf commands that can modify your credentials is doing an unbind followed by a bind. Many, but not all, service brokers will throw away credentials on unbind and provide new, unique credentials for a bind. This is often desirable so that you can rotate credentials if credentials are compromised.
Where this can be a problem is if you have custom scripts or cf cli plugins to implement rolling deployments. Most tools like this will use two separate application instances, which means you'll have two separate bindings and two separate sets of credentials.
If you must have one set of credentials you can use a service key to work around this. Service keys are like bindings but not associated with an application in CloudFoundry.
The downside of a service key is that it's not automatically exposed to your application through $VCAP_SERVICES the way a binding is. To work around this, you can pass the service key's credentials into a user-provided service and then bind that to your application, or you can pass them into your application through other environment variables, like DB_URL.
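A rough sketch of that workaround, where the service, key, and credential names are placeholders:
cf create-service-key myapp-test-database my-db-key
cf service-key myapp-test-database my-db-key   # view the generated credentials
cf create-user-provided-service my-db-creds -p '{"username":"myuser","password":"secret"}'
cf bind-service myapp-test my-db-creds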
The other option is to switch away from using scripts and cf cli plugins for blue/green deployment and to use the support that is now built into Cloud Foundry. With cf cli version 7+, cf push has a --strategy option which can be set to rolling to perform a rolling deployment. This does not create multiple application instances and so there would only ever exist one service binding and one set of credentials.
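For example, using the app name from the manifest above:
cf push myapp-test --strategy rolling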
Request a static username using the extra bind parameter "username":
cf bind-service my-app-test-CANDIDATE myapp-test-database -c "{\"username\":\"myuser\"}"
With cf7+ it's possible to add parameters to the manifest:
applications:
- name: myapp-test
  services:
  - name: myapp-test-database
    parameters: { "username": "myuser" }
https://docs.cloudfoundry.org/devguide/services/application-binding.html#arbitrary-params-binding
Note: Arbitrary parameters are not supported in app manifests in cf CLI v6.x. Arbitrary parameters are supported in app manifests in cf CLI v7.0 and later.
However, I can't find the new syntax here: https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#services-block. The syntax I use comes from some other SO question.

Getting started with Vault for existing non-containerized Windows apps

We have a bunch of Windows server applications that currently handle secrets as follows; our apps are in C#.
We store them in settings files in code
We store them encrypted, using a certificate
The servers have this certificate with the private key, so they can decrypt the secret
We're looking at implementing Hashicorp Vault. It seems easy enough to simply replace the encrypt-store-decrypt with storing the secret in Vault in the KV engine, and just grabbing it in our apps - that takes that certificate out of the picture entirely. Since we're on-prem, I'll need to figure out our auth method.
We have different apps running on different machines, and it's somewhat dynamic (not as much as an autoscaling scenario, but not permanent - so we can't just assign servers to roles one time and depend on Kerberos auth).
I'm unsure how to make AppRole work in our scenario. We don't have one of the example "trusted platforms" or "trusted entities", there's no Nomad, Chef, Terraform, etc. We have Windows machines, in a domain, and we have a homegrown orchestrator that could be queried to say "This machine name runs these apps", so maybe there's something that can be done there?
Am I in "write your own auth plugin" territory, to speak to our homegrown orchestrator?
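For reference, my rough understanding of the basic AppRole flow (role and policy names here are placeholders) is that some trusted entity, maybe our orchestrator, fetches and delivers the SecretID to each machine:
vault auth enable approle
vault write auth/approle/role/my-windows-app token_policies=my-app-secrets token_ttl=1h
vault read auth/approle/role/my-windows-app/role-id
vault write -f auth/approle/role/my-windows-app/secret-id
# The app then logs in with both values to obtain a token:
vault write auth/approle/login role_id=<role-id> secret_id=<secret-id>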
Edit - someone on Reddit suggested that this is a simple solution if our apps are all 1-to-1 with the Windows domain accounts they run under, because then we can just use Kerberos authentication. That's not currently the way we're architected, but we've got to solve this somehow, and that might do it nicely.
2nd edit - replaced "services" with "apps", since most of our services aren't actually running as Windows services, just processes. The launcher is a Windows service but the individual processes it launches are not.
How about Group Managed Service Accounts?
https://learn.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview
Essentially you create one "trusted platform" (for your vault service).
Your service can still have its own identity but delegate to the gMSA when it wants to retrieve the secrets.
For future visibility, here's what we landed on:
TLS certificate authentication. Using Vault, we issue a handful of certs, each corresponding to a security policy/profile, so that any machine holding a given certificate can authenticate and retrieve the secrets it should have access to.
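Roughly, the Vault-side setup looks like this, where names, policies, and file paths are placeholders:
vault auth enable cert
vault write auth/cert/certs/web-apps \
    display_name=web-apps \
    policies=web-apps-secrets \
    certificate=@web-apps-ca.pem
# Each machine then authenticates with its issued certificate:
vault login -method=cert -client-cert=app.pem -client-key=app-key.pem name=web-apps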
Kerberos ended up being a dead end for two reasons. The vault.exe agent (which is part of this use case) can't use the native Windows Kerberos SSPI, so we'd have to manage and distribute keytab files. Also, if we used machine authentication, it would blow up our client count (we're using the cloud-hosted HCP Vault, where pricing is partially based on client count).
Custom plugins can't be loaded into HCP Vault, of course.
Azure auth won't work either; it requires Managed Identities, which you can't assign to on-prem machines. Otherwise this might have been a great fit.

How to setup google service account authorization in Node.js with JSON key file?

Trying to make use of the Server to Server OAuth flow defined here:
https://developers.google.com/identity/protocols/OAuth2ServiceAccount
Since I'm running from a local dev environment, I've created a service account in GCP and downloaded the JSON file with the private key, but cannot find any Node.js code examples on how to:
1) load the JSON file
2) set delegated credentials (for G Suite domain-wide authorization)
Places I've looked (besides Stack Overflow) include Google's GitHub wiki for the Node.js client library, which does talk about service-to-service auth but seems to assume you're running on App Engine or Google Cloud and don't need to load a key file:
https://github.com/googleapis/google-api-nodejs-client#service-to-service-authentication
The Admin SDK Activities Reports API has a Node example, but it's using the web-based flow assuming a user is present:
https://developers.google.com/admin-sdk/reports/v1/quickstart/nodejs
Buried deep in the Node.js samples is a use of the Directory API that does seem to take a keyfile as input, but when I try running it locally it says getClient is not a constructor, and this example still doesn't show how to set the G Suite admin user for context (which is generally where a refresh token and access token get loaded into the app):
https://github.com/googleapis/google-api-nodejs-client/blob/master/samples/directory_v1/group-delete.js
So... does anybody have an example of this? I really don't want to switch to a Python runtime, but Google seems to have left out important examples on this topic.
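For later readers, here is a minimal sketch of both steps using the googleapis package; the scope, domain, and admin address are placeholders to adapt:
const { google } = require('googleapis');
const key = require('./service-account-key.json'); // 1) load the JSON key file

const auth = new google.auth.JWT({
  email: key.client_email,
  key: key.private_key,
  scopes: ['https://www.googleapis.com/auth/admin.directory.user.readonly'],
  subject: 'admin@example.com', // 2) impersonate an admin for domain-wide delegation
});

const directory = google.admin({ version: 'directory_v1', auth });
directory.users.list({ domain: 'example.com' })
  .then((res) => console.log(res.data.users))
  .catch(console.error);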

Google Cloud Storage 500 Internal Server Error 'Google::Cloud::Storage::SignedUrlUnavailable'

Trying to get Google Cloud Storage working on my app. I successfully saved an image to a bucket, but when trying to retrieve the image, I receive this error:
GCS Storage (615.3ms) Generated URL for file at key: 9A95rZATRKNpGbMNDbu7RqJx ()
Completed 500 Internal Server Error in 618ms (ActiveRecord: 0.2ms)
Google::Cloud::Storage::SignedUrlUnavailable (Google::Cloud::Storage::SignedUrlUnavailable):
Any idea of what's going on? I can't find an explanation for this error in their documentation.
To provide some explanation here...
Google App Engine (as well as Google Compute Engine, Kubernetes Engine, and Cloud Run) provides "ambient" credentials associated with the VM or instance being run, but only in the form of OAuth tokens. For most API calls, this is sufficient and convenient.
However, there are a small number of exceptions, and Google Cloud Storage is one of them. Recent Storage clients (including the google-cloud-storage gem) may require a full service account key to support certain calls that involve signed URLs. This full key is not provided automatically by App Engine (or other hosting environments); you need to provide one yourself. So, as a previous answer indicated, if you're using Cloud Storage you may not be able to depend on the "ambient" credentials. Instead, you should create a service account, download a service account key, and make it available to your app (for example via the Active Storage configs, or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable).
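For example, with a downloaded key file (the path is a placeholder):
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json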
I was able to figure this out. I had been following Rails' guide on Active Storage with Google Cloud Storage, and was unclear on how to generate my credentials file.
google:
  service: GCS
  credentials: <%= Rails.root.join("path/to/keyfile.json") %>
  project: ""
  bucket: ""
Initially, I thought I didn't need a keyfile due to this sentence in Google's Cloud Storage authentication documentation:
If you're running your application on Google App Engine or Google Compute Engine, the environment already provides a service account's authentication information, so no further setup is required.
(I am using Google App Engine)
So I commented out the credentials line and started testing. Strangely, I was able to write to Google Cloud Storage without issue. However, when retrieving the image I would receive the 500 server error Google::Cloud::Storage::SignedUrlUnavailable.
I fixed this by generating my private key and adding it to my rails app.
Another possible solution, as of google-cloud-storage gem version 1.27 (August 2020), is documented here. Calling Google::Auth.get_application_default as in the documentation returned an empty object for me, but using Google::Cloud::Storage::Credentials.default.client instead worked.
If you get a Google::Apis::ClientError: badRequest: Request contains an invalid argument response when signing, check that you have a dash in place of the project name in the signing URL (i.e. projects/-/serviceAccounts; an explicit project name in the path is deprecated and no longer valid) and that the "issuer" string is correct: the full email address of the service account, not just the service account name.
If you get Google::Apis::ClientError: forbidden: The caller does not have permission, verify the roles your service account has:
gcloud projects get-iam-policy <project-name> \
  --filter="bindings.members:<sa_name>" \
  --flatten="bindings[].members" \
  --format='table(bindings.role)'
=> ROLE
roles/iam.serviceAccountTokenCreator
roles/storage.admin
serviceAccountTokenCreator is required to call the signBlob service, and you need storage.admin to have ownership of the thing you need to sign. I believe these are project-wide roles; unfortunately I couldn't get it to work with more fine-grained permissions (e.g. making one app admin of only a certain Storage bucket).
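For completeness, a sketch of granting the token-creator role at the project level; the project and service account values are placeholders:
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountTokenCreator"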

Deploy a business network on Bluemix

I used this tutorial to deploy a business network on a free Bluemix cluster: https://ibm-blockchain.github.io/
I also deployed the REST server and communicate with it via web apps.
All went fine until yesterday, when the REST server stopped being accessible.
I deleted everything on the cluster using the delete_all script available in the ibm-container-service repository.
I followed the install procedure again using the create_all script. I could access the Composer Playground (port 31080) again, but was not really able to deploy an online business network using the "profile" hlfv1: it now asks for credentials at the bottom of the deploy UI.
I don't know what to fill in. I tried ID + Password. That way I was able to deploy, but I got an access error when clicking "connect now". I was able to start the REST server then, but if I try to access it in the browser (port 31090), I get the feedback that I'm not authorized.
Any ideas?
And do you know which changes have been made in the last month that could cause these troubles?
Thx
Phil
The tutorial you point to only covers Playground used with a web browser connection, not a real fabric. When you deploy to a real fabric, you have to provide an initial identity that you want bound to an initial participant in the business network. The initial participant will be of type org.hyperledger.composer.system.NetworkAdmin and given the name of the initial identity you provide.
In the credentials dialog, to get started you should select the ID and Secret radio button. Then for Enrollment ID enter admin, and for the Enrollment Secret enter adminpw.
This is the name and secret of the bootstrap identity that exists in the fabric-ca server that has been deployed as part of the scripts.
By providing this information, that identity will be enrolled and its public certificate will be bound to a NetworkAdmin participant called admin. This admin identity will then have access to the business network, as only identities that are bound to a participant in the business network can have any sort of access.