We have created secrets and scopes in Databricks, and we are using dbutils to access those secrets and scopes:
dbutils.secrets.get(SCOPE, KEY)
Unfortunately, this dbutils call is not working in the development environment, so I need advice on how to access the secrets there and what the best possible alternatives are.
If you use Databricks Connect, then the Secrets API is disabled by default, but it can be enabled on request. It's described in the documentation:
Because of security restrictions, the ability to call dbutils.secrets.get is disabled by default. Contact Databricks support to enable this feature for your workspace.
But really, for tests it should be mocked; the real call should only be used when you're debugging something.
I have created a Service Bus namespace in Azure, along with a topic and a subscription. I also have a simple Azure Functions version 1 function that triggers on messages received on the topic, like this:
[FunctionName("MyServiceBusTriggerFunction")]
public static void Run([ServiceBusTrigger("myTopic", "mySubscription", Connection = "MyConnection")]string mySbMsg, TraceWriter log)
{
    log.Info($"C# ServiceBus topic trigger function processed message: {mySbMsg}");
}
The function triggers nicely on messages sent to the topic when I define the connection string in the function's Application Settings using a Shared Access Policy for the topic, like this:
Endpoint=sb://MyNamespace.servicebus.windows.net/;SharedAccessKeyName=mypolicy;SharedAccessKey=UZ...E0=
Now, instead of Shared Access Keys, I would like to use Managed Service Identity (MSI) for accessing the Service Bus. According to this (https://learn.microsoft.com/en-us/azure/active-directory/managed-service-identity/services-support-msi) it should be possible, unless I have misunderstood something. I haven't managed to get it working, though.
What I tried was to:
set Managed Service Identity to "On" for my function in the Azure portal
give the function the Owner role in the Service Bus Access Control section in the Azure portal
set the connection string for MyFunction like this: Endpoint=sb://MyNamespace.servicebus.windows.net/
The function is not triggering in this setup, so what am I missing or what am I doing wrong?
I'd be grateful for any advice to help me get further. Thanks.
Update for Microsoft.Azure.WebJobs.Extensions.ServiceBus version 5.x
There are now official docs for the latest version of the package here.
{
"Values": {
"<connection_name>__fullyQualifiedNamespace": "<service_bus_namespace>.servicebus.windows.net"
}
}
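For example, the question's trigger is unchanged with the 5.x extension (a sketch; the in-process model takes ILogger instead of TraceWriter), and only the app setting differs:

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class MyServiceBusTriggerFunction
{
    [FunctionName("MyServiceBusTriggerFunction")]
    public static void Run(
        // "MyConnection" is resolved via the MyConnection__fullyQualifiedNamespace
        // app setting above, so no SharedAccessKey is stored anywhere.
        [ServiceBusTrigger("myTopic", "mySubscription", Connection = "MyConnection")] string mySbMsg,
        ILogger log)
    {
        log.LogInformation($"C# ServiceBus topic trigger function processed message: {mySbMsg}");
    }
}

Note that with an identity-based connection, the function's identity also needs an appropriate RBAC role on the namespace, such as Azure Service Bus Data Receiver.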
Previous answer:
This actually seems to be possible now; at least it worked just fine for me. You need to use this connection string:
Endpoint=sb://service-bus-namespace-name.servicebus.windows.net/;Authentication=ManagedIdentity
I have not actually found any documentation about this on the Microsoft site, only in a blog post here.
Microsoft does have documentation, though, on the roles you can use and how to limit them to a scope here. Example:
az role assignment create \
--role $service_bus_role \
--assignee $assignee_id \
--scope /subscriptions/$subscription_id/resourceGroups/$resource_group/providers/Microsoft.ServiceBus/namespaces/$service_bus_namespace/topics/$service_bus_topic/subscriptions/$service_bus_subscription
what am I missing or what am I doing wrong?
You may be mixing up MSI and Shared Access Policy. They use different providers to access Azure Service Bus. You can either just use the connection string, or just use MSI to authenticate.
When you use Managed Service Identity (MSI) to authenticate, you need to create a token provider for the managed service identity with the following code:
TokenProvider.CreateManagedServiceIdentityTokenProvider(ServiceAudience.ServiceBusAudience)
This TokenProvider's implementation uses the AzureServiceTokenProvider found in the Microsoft.Azure.Services.AppAuthentication library. AzureServiceTokenProvider tries a number of different methods, depending on the environment, to get an access token. You then initialize the client to operate on the Service Bus.
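A minimal sketch of that initialization (assuming the older WindowsAzure.ServiceBus package, which is where this API lives; the namespace, topic and subscription names are the ones from the question):

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Token provider backed by the managed service identity
var tokenProvider = TokenProvider.CreateManagedServiceIdentityTokenProvider(
    ServiceAudience.ServiceBusAudience);

// Create the factory against the namespace endpoint (no SAS key involved)
var messagingFactory = MessagingFactory.Create(
    "sb://MyNamespace.servicebus.windows.net/",
    new MessagingFactorySettings
    {
        TransportType = TransportType.Amqp,
        TokenProvider = tokenProvider
    });

// Then operate on the entity as usual
var subscriptionClient = messagingFactory.CreateSubscriptionClient("myTopic", "mySubscription");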
For more details, you could refer to this article.
When you use the Service Bus connection string, on the other hand, the Shared Access Signature (SAS) token provider is used, so you can operate directly.
Agreed that from an Azure Function we cannot access a resource like Azure Service Bus directly. However, one still does not need to put the secret (in this case the SharedAccessKey in the connection string) into the app settings directly.
Azure Functions can work with Azure Key Vault. You can store the connection string with its sensitive information as a secret in Key Vault, grant the Function's system-assigned identity access to the Key Vault, and then specify the value for the setting in the portal as
@Microsoft.KeyVault(SecretUri={theSecretUri})
Details on how to achieve the above are described in the following blog post:
https://medium.com/statuscode/getting-key-vault-secrets-in-azure-functions-37620fd20a0b
This still avoids specifying the connection string directly in Azure Functions, and provides a single point of access via the vault that can be disabled in case of a security breach.
I've deployed Buildbot on cloud VMs, in Docker, and so on. I've been able to set up authentication, but I could not disable anonymous access.
I really can't allow anonymous access, since it is a privately owned resource; worst of all, passwords and other sensitive information show up in many logs from build steps.
buildbot version: 0.9.8
Documentation is scarce/nonexistent on this subject.
Thanks in advance.
Buildbot itself only allows you to disable access to the REST API, so anonymous users will see an 'empty' web interface with no builds, logs, etc. Access to the web interface itself can only be disabled through external web server settings.
Example authz config:
c['www']['authz'] = util.Authz(
    allowRules=[
        util.AnyEndpointMatcher(role='admins', defaultDeny=False),
        util.AnyControlEndpointMatcher(role='admins', defaultDeny=False),
        util.AnyEndpointMatcher(role='anonymous')
    ],
    roleMatchers=[
        # for example, map specific authenticated users to the 'admins' role
        util.RolesFromEmails(admins=['admin@example.com'])
    ]
)
As the Buildbot documentation on authorization rules notes, one can implement a default-deny policy by putting an AnyEndpointMatcher with a nonexistent role at the end of the list. Please note that this will deny all REST APIs, and most of the UI does not show a proper access-denied message in case of such an error.
I have configured my Service Fabric services to use Azure Key Vault for configuration. If, after the app is deployed, I change the config in Key Vault, how do I then restart the affected service so it can pick up the new config value?
Or is there another way altogether?
The best way to handle configuration in SF is to use your application parameters file for this. If you use a continuous deployment pipeline like VSTS, you can use release variables to set these values for you, deploy a new version of your configuration file, and let SF do the rest.
But in case you still need to use Key Vault:
If you are using ASP.NET Core, using Azure Key Vault to store secrets works like loading configuration files: the values are cached until you reload them.
You can use IConfigurationRoot.Reload() to reload the secrets with the new values from your Key Vault. Check it here.
The trick now is to make this automatic. You have to:
Enable Key Vault logging to track the changes; this will emit logs once you update the Key Vault. Check it here and here.
And then:
Create an endpoint in your API that can be called to refresh the secrets (a sketch is shown after this list of options). Make it secure to avoid abuse.
Create an Azure Function to process these logs and trigger the endpoint.
Or:
Create a message queue to receive the command, and have the system read the message to refresh the settings.
Or:
Make a timer to refresh at specific intervals. (I would not recommend this approach because you might end up with outdated config, but it is easy and useful for quick test scenarios, not production.)
Or, if you prefer a more custom-designed solution, you can create your own ConfigurationProvider based on Key Vault and implement the caching logic according to your app's architecture, so you don't have to bother with the rest. Please refer to the ASP.NET source here for this.
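Coming back to the endpoint option: a minimal sketch (assuming ASP.NET Core, with the IConfigurationRoot built at startup registered in DI; the route and role name are hypothetical placeholders you should harden for your own app):

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

[ApiController]
[Route("admin/config")]
public class ConfigController : ControllerBase
{
    private readonly IConfigurationRoot _configurationRoot;

    public ConfigController(IConfigurationRoot configurationRoot)
    {
        _configurationRoot = configurationRoot;
    }

    [HttpPost("reload")]
    [Authorize(Roles = "ConfigAdmin")] // hypothetical role: secure this endpoint!
    public IActionResult Reload()
    {
        // Re-reads all configuration providers, including the Key Vault one
        _configurationRoot.Reload();
        return Ok();
    }
}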
The documented way to provide configuration to your services is by using the 'configuration' part of your application package.
As this is versioned, it can be upgraded without requiring your services to be upgraded or even restarted.
More info here and here.
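For completeness, a minimal sketch of how a service can pick up such a config-only upgrade without a restart (inside a Reliable Service, where Context is available; the section and parameter names are hypothetical):

using System.Fabric;
using System.Fabric.Description;

// e.g. in the service constructor: subscribe to configuration package changes
Context.CodePackageActivationContext.ConfigurationPackageModifiedEvent +=
    (sender, e) => LoadSettings(e.NewPackage);

private void LoadSettings(ConfigurationPackage package)
{
    // "MySection" and "MySetting" are hypothetical entries in Settings.xml
    ConfigurationSection section = package.Settings.Sections["MySection"];
    string value = section.Parameters["MySetting"].Value;
    // ...apply the new value inside the running service
}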
Is it possible to use AWS KMS and a tool like credstash without the use of EC2 or equivalent, or does it rely solely on IAM roles?
I've got a server elsewhere where I am testing some things out, and ultimately I will be looking at migrating an app to EC2 etc. to make use of scaling. But for now, whilst I'm setting up my deployment pipeline etc., I wondered if it was still possible to make use of KMS on my non-AWS-provisioned server.
The only possible way I can think of is by installing the AWS CLI tools on the server in question. Does this sound like the right approach?
What @Viccari said in the comments is correct. In terms of what you want to do (store passwords), the AWS Parameter Store would be a good fit for you. See https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html for more information. The guide explicitly calls out your use case:
Parameter Store offers the following benefits and features.
Use a secure, scalable, hosted secrets management service (No servers to manage).
In the end, whether you use Parameter Store or KMS, you will need some sort of credentials stored somewhere in order to grab an AWS STS token with which to call the underlying AWS services. If working outside of AWS EC2, you will need the AWS Access Key and AWS Secret Key from an IAM user. If you are in EC2, the IAM instance role will magically provide you the credentials and use that role to call those AWS services. The AWS SDK does this for you behind the scenes.
But, as you state, you don't want to run this in EC2 (to save money, or for other reasons). The quickest way to store these credentials is in an untracked file (added to your .gitignore) that you can source environment variables from, which your program will then read. This allows you to do local testing and to later run it in EC2 with zero code changes. See https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html for the variables to set. Note that this doc talks about the CLI; the SDKs follow the same behavior.
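For illustration, a minimal sketch using the AWS SDK for .NET (package AWSSDK.SimpleSystemsManagement; the parameter name is hypothetical). Outside EC2 the SDK picks the credentials up from the environment variables above; inside EC2 it falls back to the instance role, which gives exactly the zero-code-change behavior described:

using System;
using System.Threading.Tasks;
using Amazon.SimpleSystemsManagement;
using Amazon.SimpleSystemsManagement.Model;

class Program
{
    static async Task Main()
    {
        // Credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
        // env vars here, or from the instance role when running on EC2.
        var client = new AmazonSimpleSystemsManagementClient();

        var response = await client.GetParameterAsync(new GetParameterRequest
        {
            Name = "/myapp/db-password", // hypothetical SecureString parameter
            WithDecryption = true        // decrypted via KMS on the service side
        });

        Console.WriteLine(response.Parameter.Value);
    }
}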
I have a console app in .NET Core. I use appsettings.Development.json and appsettings.Staging.json for the dev and staging environments, but for the production environment I use UserSecrets. I have two problems. The first: when the app runs in the production environment, it does not create the UserSecrets folder under %APPDATA%\Microsoft, so I have to create it manually before it starts to work.
The second part of my question is this: today I found out that Microsoft wrote here:
The Secret Manager tool is used only in development. You can safeguard Azure test and production secrets with the Microsoft Azure Key Vault configuration provider. See Azure Key Vault configuration provider for more information.
I don't have Azure. What can I use in production if I am not supposed to use UserSecrets?
While environment variables are one of the most-used options in web development, and The Twelve-Factor App methodology states "Store config in the environment", there are some reasons why this may not be the best approach:
The environment is implicitly available to the process, and it's hard to track access. As a result, you may, for example, face a situation where your error report contains your secrets.
The whole environment is passed down to child processes (if not explicitly filtered), so your secret keys are implicitly made available to any third-party tools that may be used.
These are some of the reasons why products like Vault have become popular nowadays.
So yes, you may use environment variables, but be aware of the above.
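As a minimal sketch of that approach for a .NET Core console app (assuming the Microsoft.Extensions.Configuration packages; the MYAPP_ prefix and key names are hypothetical), environment variables can be layered over the JSON files so production secrets never live in a file:

using System;
using Microsoft.Extensions.Configuration;

var environment = Environment.GetEnvironmentVariable("DOTNET_ENVIRONMENT") ?? "Production";

var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddJsonFile($"appsettings.{environment}.json", optional: true)
    // Highest priority: env vars, e.g. MYAPP_Database__Password
    .AddEnvironmentVariables(prefix: "MYAPP_")
    .Build();

// The double underscore maps to the ":" section separator
var dbPassword = configuration["Database:Password"];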
For storing secure data in your app: if you're using Azure, then Azure Key Vault is your answer; see Microsoft Azure Key Vault.
If you're using K8s, you can store it using the Secrets Store CSI driver.
Or use system/OS environment variables.