Can I use credentials in Nuclio functions?

I am using Nuclio functions and I need to provide credentials in the function for things like accessing a database. Is there a way to store these credentials securely (not in plain text)?

You can use a Kubernetes secret in a Nuclio function. There are several steps to set this up.
First, create a Kubernetes secret; a simple example using kubectl looks like this:
kubectl create secret generic db-user-pass --from-literal=username=devuser --from-literal=password='<A-Password-Here>'
Then create a Nuclio function with set_env_from_secret, like this:
import mlrun

fn = mlrun.code_to_function("nuclio-with-secret", kind='nuclio', image='mlrun/mlrun', handler="handler")
fn.set_env_from_secret("a-secret-name", "db-user-pass", "password")
fn.apply(mlrun.auto_mount())
fn.deploy()
In your Nuclio function, you can use the secret like this:
the_secret_inside_nuclio_to_use = os.getenv('a-secret-name')
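For completeness, here is a minimal sketch of what the handler itself could look like; the env var name matches the one passed to set_env_from_secret above, and everything else is purely illustrative:

import os

def handler(context, event):
    # "a-secret-name" is the env var set via set_env_from_secret above; its value
    # comes from the "password" key of the db-user-pass Kubernetes secret.
    db_password = os.getenv("a-secret-name")
    context.logger.info("db password present: %s" % (db_password is not None))
    # ... use db_password when opening the database connection the function needs ...
    return "ok"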

Related

Single-use secret in HashiCorp Vault

Is it possible with any secret engine to have a "single use" password?
I have a command that generates an RSA keypair for users, and I would like them to retrieve their private key. I can obviously print it out, write it to a file, etc., but I thought it would be nice if it were stored in a "single use" place in Vault. Then the user could retrieve it via the UI and know that no one else has viewed it; if someone else had viewed it, they would need to regenerate the keypair.
Basically, can we have a Vault key that can only be read once?
You can create a policy that only has access to that secret, for example
# policy: rsa
path "secret/rsa" {
capabilities = ["read"]
}
and then create a wrapped token for that policy, for example
$ vault token create -policy=rsa -num_uses=1 -wrap-ttl=120
Key                              Value
---                              -----
wrapping_token:                  s.9QFJ8mRxGJD0e7kFfFIbdpDM
wrapping_accessor:               S0zKNUr2ENbnCtj0YyriO31b
wrapping_token_ttl:              2m
wrapping_token_creation_time:    2019-12-17 09:45:42.537057 -0800 PST
wrapping_token_creation_path:    auth/token/create
wrapped_accessor:                VmBKXoc19ZLZlHGl0nQCvV6r
This will generate a wrapped token.
You can give that to your end user and they can unwrap it with
VAULT_TOKEN="s.9QFJ8mRxGJD0e7kFfFIbdpDM" vault unwrap
which will generate a token.
With that token, the user will be able to log in to Vault and retrieve the RSA creds only once, since the token will be invalid afterwards.
You can now guarantee that the creds have been retrieved only by the target user, since the wrapped token can be unwrapped only once.
Note: you might need to adjust num_uses when you create the token if your end user goes through the UI, as the UI might use the token to perform more than one action.
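If the end user prefers a script over the CLI or the UI, here is a minimal sketch of the same unwrap-and-read flow using the hvac Python client; the Vault address is an assumption, the wrapping token is the example value from the output above, and the secret is assumed to live at secret/rsa in a KV v1 engine (matching the policy):

import hvac

VAULT_ADDR = "http://127.0.0.1:8200"            # assumed Vault address
WRAPPING_TOKEN = "s.9QFJ8mRxGJD0e7kFfFIbdpDM"   # example wrapping token from above

# Unwrap using the wrapping token itself; this can succeed at most once.
unwrap_client = hvac.Client(url=VAULT_ADDR, token=WRAPPING_TOKEN)
real_token = unwrap_client.sys.unwrap()["auth"]["client_token"]

# Use the single-use token to read the RSA creds (KV v1 path from the policy above).
user_client = hvac.Client(url=VAULT_ADDR, token=real_token)
print(user_client.secrets.kv.v1.read_secret(path="rsa")["data"])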

How to add query parameters to routes in Lumen?

I am trying to find out how to add query parameters to routes in Lumen.
This is an example of a route I created:
$app->get('/product/{apikey}','ProductController@getProduct');
This works when I use
http://api.lumenbased.com/product/10920918
but I would like to use it like this
http://api.lumenbased.com/product/?apikey=10920918
I tried this
$app->get('/product/?apikey={apikey}','ProductController@getProduct');
But this gives me MethodNotAllowedHttpException
How do I write routes with query parameters in Lumen?
Just do:
$app->get('/product','ProductController@getProduct');
and use:
$request->get('apikey')
in the ProductController@getProduct function.
(That said, validating an API key is better done via middleware...)

How to do Flask REST API key validation without decorators?

I'm new to Flask and I'm trying to accomplish the following:
For all of the subroutes of a particular route, I want to extract a parameter and validate that an API key has been provided either as a GET parameter or a header key.
The ideal would be if I could nest blueprints. Then I would do something like the following:
Have a main blueprint to pull the parameter and validate the API key:
@secured_api.url_value_preprocessor
def pull_tenant(endpoint, values):
    g.tenant_code = values.pop('tenant')

@secured_api.before_request
def validate_api_key():
    api_key = request.headers.get('X-Api-Key')
    ...
    if api_key is None:
        raise InvalidApiKey()
Then, having another blueprint with my resources (v1_bp) I could do:
secured_api.register_blueprint(v1_bp, url_prefix="/v1")
app.register_blueprint(secured_api, url_prefix='/secured/<tenant>')
So that all of v1_bp's routes would be under /secured/<tenant>/v1.
What would be the best way to achieve this?
Thanks in advance!
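For what it's worth, here is a minimal sketch of the nested-blueprint approach described above. It assumes a recent Flask (2.1 or later), where a blueprint can register child blueprints and a parent blueprint's url_value_preprocessor / before_request hooks also run for routes in its children; the /products route and the api_key query parameter name are purely illustrative:

from flask import Flask, Blueprint, g, jsonify, request

secured_api = Blueprint("secured_api", __name__)
v1_bp = Blueprint("v1", __name__)

@secured_api.url_value_preprocessor
def pull_tenant(endpoint, values):
    # Pop the tenant from the URL values so view functions don't need a tenant argument.
    g.tenant_code = values.pop("tenant")

@secured_api.before_request
def validate_api_key():
    # Accept the key either from a header or from a GET parameter.
    api_key = request.headers.get("X-Api-Key") or request.args.get("api_key")
    if api_key is None:
        # Returning a response here short-circuits the request.
        return jsonify(error="missing API key"), 401

@v1_bp.route("/products")
def list_products():
    return jsonify(tenant=g.tenant_code, products=[])

# Nest the resource blueprint under the secured one, then mount the whole thing.
secured_api.register_blueprint(v1_bp, url_prefix="/v1")

app = Flask(__name__)
app.register_blueprint(secured_api, url_prefix="/secured/<tenant>")
# GET /secured/acme/v1/products now runs pull_tenant and validate_api_key first.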

What API Gateway methods support Authorization?

When I create a resource/method in AWS API Gateway API I can create one of the following methods: DELETE, GET, HEAD, OPTIONS, PATCH or POST.
If I choose GET then API Gateway doesn't pass authentication details; but for POST it does.
For GET, should I be adding the Cognito credentials to the URL? Or should I just never use GET and use POST for all authenticated calls?
My set-up in API Gateway/Lambda:
I created a Resource and two methods: GET and POST
Under Authorization Settings I set Authorization to AWS_IAM
For this example there is no Request Model
Under Method Execution I set Integration type to Lambda Function and I check Invoke with caller credentials (I also set Lambda Region and Lambda Function)
I leave Credentials cache unchecked.
For Body Mapping Templates, I set Content-Type to application/json and the Mapping Template to
{ "identity" : "$input.params('identity')"}
In my Python Lambda function:
def lambda_handler(event, context):
    print context.identity
    print context.identity.cognito_identity_id
    return True
Running the Python function:
For the GET, context.identity is None.
For the POST, context.identity has a value and context.identity.cognito_identity_id has the correct value.
As mentioned in the comments: all HTTP methods support authentication. If the method is configured to require authentication, authentication results should be included in the context for you to access via mapping templates and pass downstream as contextual information.
If this is not working for you, please update your question to reflect:
How your API methods are configured.
What your mapping template is.
What results you see in testing.
UPDATE
The code in your Lambda function is checking the context of the Lambda function, not the value from API Gateway. To access the value passed in from API Gateway, you would need to use event.identity, not context.identity.
This would only half-solve your problem, as you are not using the correct value to access the identity in API Gateway. That would be $context.identity.cognitoIdentityId (assuming you are using Amazon Cognito auth). Please see the mapping template reference for a full guide to supported variables.
Finally, you may want to consider using the template referenced in this question.
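Putting the two corrections together, a rough sketch (the "identity" key is just the name chosen in the question; the response shape is illustrative). The body mapping template would become:

{ "identity" : "$context.identity.cognitoIdentityId" }

and the Lambda function would read the value from the event rather than the context:

def lambda_handler(event, context):
    # "identity" is whatever key the body mapping template above populates.
    caller_identity = event.get("identity")
    print("caller identity: %s" % caller_identity)
    return {"identity": caller_identity}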

How to set the AWS access key and AWS secret key inside spark-shell

Can you let me know the best way to set the AWS access key and AWS secret key while inside spark-shell? I tried setting them using
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", MY_ACCESS_KEY)
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", MY_SECRET_KEY)
and got
java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively)
I am able to get it to work by passing it as part of the url
s3n://MY_ACCESS_KEY:MY_SECRET_KEY@BUCKET_NAME/KEYNAME
after replacing the slashes in my secret key with %2F, but I wanted to know whether there is an alternative to embedding my access key and secret key in the URL.
In addition to Holden's answer, here's a more specific example:
import org.apache.hadoop.mapred.JobConf

val jobConf = new JobConf(sparkContext.hadoopConfiguration)
jobConf.set("fs.s3n.awsAccessKeyId", MY_ACCESS_KEY)
jobConf.set("fs.s3n.awsSecretAccessKey", MY_SECRET_KEY)
val rdd = sparkContext.hadoopRDD(jobConf, ...)
You can use the hadoopRDD function and specify the JobConf object directly with the required properties.