How to get the activation ID of the action invoked in OpenWhisk? - ibm-cloud

When we invoke an action through the CLI, we get the activation ID as part of the result. But when we generate the API for the action in Bluemix and invoke that API, we receive only the result of the action. How can we get the activation ID of the action after the invocation? And can we retrieve the response later by using that activation ID?

An action has its activation ID available in its execution context, via the environment variable __OW_ACTIVATION_ID.
You can return this value in your response. If you're using a web action or the API gateway and can send custom headers in the result, you can use a header as the mechanism for returning the ID; otherwise, simply include the ID in the body of the result.
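For example, a minimal sketch of a Python action that does this (the X-Activation-Id header name and the response shape are just illustrative):

import os

def main(params):
    activation_id = os.environ.get("__OW_ACTIVATION_ID")
    # For a web action, a custom header can carry the ID; otherwise just return it in the body.
    return {
        "headers": {"X-Activation-Id": activation_id},
        "body": {"activationId": activation_id}
    }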
Given an activation id, you can use it later with the activation API to retrieve the result.
It sounds like you want a non-blocking activation rather than request/response style. If you aren't using a web action or the API gateway, the default invoke mechanism is non-blocking and returns the activation ID to you.
Here is a reference to the REST API: https://github.com/apache/incubator-openwhisk/blob/master/docs/rest_api.md
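For illustration, a rough Python sketch of fetching an activation's result over that REST API (the API host, namespace, auth key, and activation ID are placeholders; the auth key is the user:password pair shown by wsk property get --auth):

import requests

APIHOST = "https://openwhisk.ng.bluemix.net"   # placeholder API host
NAMESPACE = "_"                                # placeholder namespace
AUTH_KEY = "uuid:key"                          # placeholder, from wsk property get --auth
ACTIVATION_ID = "abcdef0123456789"             # the ID returned by the invocation

user, password = AUTH_KEY.split(":")
resp = requests.get(
    "{}/api/v1/namespaces/{}/activations/{}/result".format(APIHOST, NAMESPACE, ACTIVATION_ID),
    auth=(user, password),
)
print(resp.json())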

If you are invoking from the CLI using the following, you should get back the activation ID and the result:
wsk action invoke --blocking the-action-name
You can get a list of activations, ordered from the most recent to the oldest:
wsk activation list
You can then retrieve a specific activation, or just its result, with wsk activation get <activation-id> or wsk activation result <activation-id>.
There's very good documentation, with plenty of detail and examples in different languages, here: https://console.ng.bluemix.net/docs/openwhisk/openwhisk_actions.html#openwhisk_actions_polling

Related

Is there a way to capture the name of a task that has been executed in SnapLogic?

We have a lot of triggered Tasks that run on the same pipelines, but with different parameters.
My question: is there a way, such as a function or an expression, to capture the name of the triggered task, so that we can use that information when writing the reports and e-mails about which task started the failing pipeline? I can't find anything even close to it.
Thank you in advance.
This answer addresses the requirement of uniquely identifying the invoker task in the invoked pipeline.
For triggered tasks, SnapLogic doesn't provide anything out of the box. However, in the case of Ultra tasks you can get $['task_name'] from the input to the pipeline.
Out of the box, SnapLogic provides the following headers that can be captured and used in the pipeline initiated by the triggered task:
PATH_INFO - The path elements after the Task part of the URL.
REMOTE_USER - The name of the user that invoked this request.
REMOTE_ADDR - The IP address of the host that invoked this request.
REQUEST_METHOD - The method used to invoke this request.
None of these contains the task-name.
In your case, as a workaround to uniquely identify the invoker task in the invoked pipeline, you could do one of the following three things.
Pass the task-name as a parameter
Add the task-name in the URL like https://elastic.snaplogic.com/.../task-name
Add a custom header from the REST call
All three of the above methods let you capture the task name in the invoked pipeline.
In your case, I would suggest a custom header, because the parameters you pass to the pipeline may be task-specific, and it is redundant to repeat the task name in the URL.
Here is how you can add a custom header to your triggered task (a sketch of the resulting call follows the quoted guidelines below).
From SnapLogic Docs -
Custom Headers
To pass a custom HTTP header, specify a header and its value through the parameter fields in Pipeline Settings. The request matches any header with Pipeline arguments and passes those to the Task, while the Authorization header is never passed to the Pipeline.
Guidelines
The header must be capitalized in its entirety. Headers are case-sensitive.
Hyphens must be changed to underscores.
The HTTP custom headers override both the Task and Pipeline parameters, but the query string parameter has the highest precedence.
For example, if you want to pass a tenant ID (X-TENANT-ID) in a header, add the parameter X_TENANT_ID and provide a default or leave it blank. When you configure the expression, refer to the Pipeline argument following standard convention: _X_TENANT_ID. In the HTTP request, you add the header X-TENANT-ID: 123abc, which results in the value 123abc being substituted for the Pipeline argument X_TENANT_ID.
(Screenshots in the original answer illustrate: creating a task-name parameter in the pipeline settings, using the task-name parameter in the pipeline, and calling the triggered task.)
Note: Hyphens must be changed to underscores.
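As a rough illustration only (the task URL, bearer token, and the X-TASK-NAME header are placeholders I've made up, not SnapLogic-defined names), calling the triggered task with the task name in a custom header could look like this:

import requests

# Placeholder values; use your own triggered-task URL and bearer token.
TASK_URL = "https://elastic.snaplogic.com/api/1/rest/slsched/feed/MyOrg/MyProject/my-task"
TOKEN = "your-task-bearer-token"

resp = requests.post(
    TASK_URL,
    headers={
        "Authorization": "Bearer " + TOKEN,
        # Captured in the pipeline as the parameter X_TASK_NAME (referenced as _X_TASK_NAME),
        # per the capitalization and hyphen-to-underscore rules quoted above.
        "X-TASK-NAME": "my-task",
    },
    timeout=30,
)
print(resp.status_code, resp.text)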
References:
SnapLogic Docs - Triggered Tasks
I'm adding this as a separate answer because it addresses the specific requirement of logging an executed triggered task separately from the pipeline. This solution has to run as a separate process (or pipeline) instead of being part of the triggered pipeline itself.
The Pipeline Monitoring API doesn't have an explicit log entry for the task name of a triggered task; the invoker fields are what you have to use.
However, the main API used by SnapLogic to populate the Dashboard is more verbose. (The original answer includes a screenshot of the response as seen in Google Chrome Developer Tools.)
You can use the invoker_name and pipe_invoker fields for identifying a triggered task.
Following is the API that is being used:
POST https://elastic.snaplogic.com/api/2/<org snode id>/rest/pm/runtime
Body:
{
    "state": "Completed,Stopped,Failed,Queued,Started,Prepared,Stopping,Failing,Suspending,Suspended,Resuming",
    "offset": 0,
    "limit": 25,
    "include_subpipelines": false,
    "sort": {
        "create_time": -1
    },
    "start_ts": null,
    "end_ts": null,
    "last_hours": 1
}
You can have a pipeline that periodically calls this API, parses the response, and populates a log table (or writes a log file).
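As a rough sketch of such a polling job's logic in Python (the credentials and the exact response structure are assumptions; check the response you actually see in the developer tools and adjust the parsing accordingly):

import requests

ORG_SNODE_ID = "<org snode id>"   # as in the URL above
URL = "https://elastic.snaplogic.com/api/2/{}/rest/pm/runtime".format(ORG_SNODE_ID)

body = {
    "state": "Completed,Stopped,Failed,Queued,Started,Prepared,Stopping,Failing,Suspending,Suspended,Resuming",
    "offset": 0,
    "limit": 25,
    "include_subpipelines": False,
    "sort": {"create_time": -1},
    "start_ts": None,
    "end_ts": None,
    "last_hours": 1,
}

# Placeholder auth; reuse whatever credentials your dashboard session uses.
resp = requests.post(URL, json=body, auth=("user@example.com", "password"))

# Assumed shape: runtime entries under response_map -> entries; verify against the real response.
for entry in resp.json().get("response_map", {}).get("entries", []):
    # invoker_name / pipe_invoker identify runs started by a triggered task.
    print(entry.get("invoker_name"), entry.get("pipe_invoker"))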

Lambda + API Gateway: Long executing function return before task finished

I have a function in Lambda that executes for up to 30s depending on the inputs.
That function is linked to an API Gateway endpoint so I am able to call it through POST. Unfortunately, API Gateway is limited to exactly 30 seconds, so if my function runs longer I get an "internal server error" back from my POST.
I see 2 solutions to this:
Create a "socket" rather than a simple POST which connects to the Lambda function; I'm not sure if this is even possible or how it would work.
Return a notification to API Gateway before the Lambda function has actually finished. This is acceptable for my use case, but again I am not sure how this would work.
The function is coded in Python 3.6, so I have access to the event and context objects.
Return a notification to API Gateway before the Lambda function has actually finished. This is acceptable for my use case, but again I am not sure how this would work.
Unfortunately, you will not be able to return a result until the Lambda function has finished. AWS freezes the execution environment as soon as the handler returns, so any attempt to keep working in the background, for example via multithreading, will be interrupted.
I suggest you create the following system:
API Gateway puts the request onto a Kinesis stream and returns the success response from the put, which contains the SequenceNumber of the Kinesis record. (How to do it)
A Lambda function is invoked by the Kinesis stream event, processes the request, and saves the fact that the job is done (or the completed result) to DynamoDB, with the SequenceNumber as the id (see the sketch after this list).
You invoke another API Gateway method with the SequenceNumber to check the status of your job in DynamoDB (if necessary).
If you just need to run the Lambda function without knowing the job result, you can skip DynamoDB entirely.
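A hedged sketch of step 2 above (the job-results table name and the do_long_running_work helper are placeholders I've invented):

import base64
import json
import boto3

# Placeholder table, created with 'id' as its partition key.
table = boto3.resource("dynamodb").Table("job-results")

def lambda_handler(event, context):
    for record in event["Records"]:
        # Kinesis record data arrives base64-encoded; the SequenceNumber doubles as the job id.
        sequence_number = record["kinesis"]["sequenceNumber"]
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        result = do_long_running_work(payload)   # the 30s+ job
        table.put_item(Item={
            "id": sequence_number,
            "state": "DONE",
            "result": json.dumps(result),
        })

def do_long_running_work(payload):
    ...   # placeholder for the actual work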
statut's approach is nice, but I'd like to give you an alternative: AWS Step Functions. It basically does what statut's solution does, but it might be easier to deploy.
All you have to do is create an AWS Step Functions state machine that has your Lambda function as its only task. AWS Step Functions works asynchronously by design and has methods to check the status and, eventually, the result. Check the docs for more info.
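For completeness, a minimal sketch of driving such a state machine with boto3 (the state machine ARN is a placeholder; the machine's single task would be your existing Lambda function):

import json
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN of a state machine whose only task is the long-running Lambda.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:long-job"

# Starting an execution returns immediately with an execution ARN.
start = sfn.start_execution(
    stateMachineArn=STATE_MACHINE_ARN,
    input=json.dumps({"some": "payload"}),
)

# Later (e.g. from another API Gateway-backed endpoint), poll for status and result.
status = sfn.describe_execution(executionArn=start["executionArn"])
print(status["status"])        # RUNNING, SUCCEEDED, FAILED, ...
print(status.get("output"))    # populated once the execution has succeeded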
A very simple solution would be for your lambda function to just asynchronously invoke another lambda function and then return a result to API Gateway.
Here's a real example of executing one lambda function asynchronously from another lambda function:
import json
import boto3
def lambda_handler(event, context):
    payload = {}
    # Parse the JSON body of the incoming request
    body = json.loads(event['body'])
    # Build a new payload with the body (re-serialized) and anything else the downstream function needs
    payload['body'] = json.dumps(body)
    payload['httpMethod'] = 'POST'
    payloadStr = json.dumps(payload)
    client = boto3.client('lambda')
    client.invoke(
        FunctionName='container name',
        InvocationType="Event",
        Payload=payloadStr
    )
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
In this example, the function invokes the other one and returns right away.
invoke documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.invoke
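Note that the calling function's execution role needs lambda:InvokeFunction permission on the target function, and because InvocationType="Event" is asynchronous, the caller only receives an acceptance (HTTP 202) back, never the downstream function's result.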

What API Gateway methods support Authorization?

When I create a resource/method in AWS API Gateway API I can create one of the following methods: DELETE, GET, HEAD, OPTIONS, PATCH or POST.
If I choose GET then API Gateway doesn't pass authentication details, but for POST it does.
For GET, should I add the Cognito credentials to the URL of my GET request, or should I just never use GET and use POST for all authenticated calls?
My set-up in API Gateway/Lambda:
I created a Resource and two methods: GET and POST
Under Authorization Settings I set Authorization to AWS_IAM
For this example there is no Request Model
Under Method Execution I set Integration type to Lambda Function and I check Invoke with caller credentials (I also set Lambda Region and Lambda Function)
I leave Credentials cache unchecked.
For Body Mapping Templates, I set Content-Type to application/json and the Mapping Template to
{ "identity" : "$input.params('identity')"}
In my Python Lambda function:
def lambda_handler(event, context):
    print context.identity
    print context.identity.cognito_identity_id
    return True
Running the Python function:
For the GET, context.identity is None
For the POST, context.identity has a value and context.identity.cognito_identity_id has the correct value.
As mentioned in the comments: all HTTP methods support authentication. If the method is configured to require authentication, the authentication results should be included in the context for you to access via mapping templates, so you can pass them downstream as contextual information.
If this is not working for you, please update your question to reflect:
How your API methods are configured.
What your mapping template is.
What results you see in testing.
UPDATE
The code in your Lambda function is checking the context of the Lambda function, not the value passed in from API Gateway. To access the value passed in from API Gateway, you need to read it from the event (for example, event['identity'] in Python), not from context.identity.
This only half solves your problem, because you are not using the correct variable to access the identity in API Gateway. That would be $context.identity.cognitoIdentityId (assuming you are using Amazon Cognito auth). Please see the mapping template reference for a full list of supported variables.
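Putting both corrections together, a rough sketch: the Body Mapping Template would pass the identity explicitly, for example { "identity" : "$context.identity.cognitoIdentityId" }, and the handler would then read it from the event rather than from the Lambda context:

def lambda_handler(event, context):
    # 'identity' is filled in by the API Gateway mapping template above,
    # not by the Lambda execution context.
    print(event.get('identity'))
    return True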
Finally, you may want to consider using the template referenced in this question.

How can I call Sling Filter before AuthenticationHandler?

I want to put a Sling filter before the authentication handler, but I have had no luck.
From the logs I can see that the auth handler is always called before my filter. Is there good documentation about this? Is it possible to run a filter before the AuthenticationHandler?
Both work: I added logging to the auth handler's extractCredentials method and to the filter's doFilter method, but unfortunately my filter is only called after the auth handler.
Here are my logs:
11:50:55.924 AuthenticationHandler extractCredentials
11:50:56.004 Before chain.doFilter
11:50:56.332 After chain.doFilter
Authentication is always done before the filter processing. At the request level, Sling processes a request in this order (source: Sling documentation):
Authentication
Resource Resolution
Servlet/Script Resolution
Request Level Filter Processing
So, you can't create a filter that would be run before the authentication.
You can use an OSGi Preprocessor; it acts as a filter that runs before authentication is called. You will find the specification and an example here:
https://docs.osgi.org/specification/osgi.cmpn/7.0.0/service.http.whiteboard.html#service.http.whiteboard.servlet.preprocessors

Passing custom parameters in PayPal Express Checkout NVP

I'm trying to find a way to pass a custom parameter through paypal's express checkout using NVP.
I've tried using the deprecated PAYMENTREQUEST_n_CUSTOM, the supposedly not deprecated PAYMENTREQUEST_0_CUSTOM and the CUSTOM parameters but none of them worked.
The only ways I can see right now (which I'd rather not use) are:
1. use one of the other parameters that I'm not using (like shipping)
2. use the return url and add to it the parameter as a GET parameter
3. use sessions.
According to the error page my version is 92.0.
And the rest of the parameters are:
$nvpstr="&SHIPPINGAMT=0&L_SHIPPINGOPTIONNAME0=test&L_SHIPPINGOPTIONAMOUNT0=0&REQCONFIRMSHIPPING=0&NOSHIPPING=1&L_SHIPPINGOPTIONISDEFAULT0=true&ADDRESSOVERRIDE=1$shiptoAddress&".
"&ALLOWNOTE=0&CUSTOM=".$CUSTOM.
"&L_NAME0=".$L_NAME0."&L_AMT0=".$L_AMT0."&L_QTY0=".$L_QTY0.
"&MAXAMT=".(string)$maxamt."&AMT=".(string)$amt."&ITEMAMT=".(string)$itemamt.
"&CALLBACKTIMEOUT=4&CALLBACK=https://www.ppcallback.com/callback.pl&ReturnUrl=".$returnURL."&CANCELURL=".$cancelURL .
"&CURRENCYCODE=".$currencyCodeType."&PAYMENTREQUEST_0_PAYMENTACTION=".$paymentType;
Don't mix PAYMENTREQUEST_0_* variables and their deprecated counterparts -- use one or the other. (E.g., don't use PAYMENTREQUEST_0_PAYMENTACTION and AMT in the same API call -- either use PAYMENTREQUEST_0_PAYMENTACTION and PAYMENTREQUEST_0_AMT, or use PAYMENTACTION and AMT.)
This appears to be the SetExpressCheckout call. You can pass a CUSTOM value in here, but if you do, the only place it will show up is in the response to your GetExpressCheckoutDetails call. You need to supply the CUSTOM value in your DoExpressCheckoutPayment call in order for it to be recorded to your account.
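As a hedged illustration of sticking to one parameter family and carrying CUSTOM through to the payment call (the credentials, token, payer ID, and amounts are placeholders; this assumes the Signature-credential sandbox endpoint):

import requests

NVP_ENDPOINT = "https://api-3t.sandbox.paypal.com/nvp"   # sandbox endpoint for signature credentials

CREDENTIALS = {
    "USER": "api-username",        # placeholders; use your own API credentials
    "PWD": "api-password",
    "SIGNATURE": "api-signature",
    "VERSION": "92.0",
}

custom_value = "order-1234"

# 1. SetExpressCheckout: CUSTOM set here only comes back in GetExpressCheckoutDetails.
set_params = dict(CREDENTIALS,
                  METHOD="SetExpressCheckout",
                  PAYMENTREQUEST_0_PAYMENTACTION="Sale",
                  PAYMENTREQUEST_0_AMT="10.00",
                  PAYMENTREQUEST_0_CUSTOM=custom_value,
                  RETURNURL="https://example.com/return",
                  CANCELURL="https://example.com/cancel")
print(requests.post(NVP_ENDPOINT, data=set_params).text)

# 2. DoExpressCheckoutPayment: pass CUSTOM again so it is recorded on the transaction.
do_params = dict(CREDENTIALS,
                 METHOD="DoExpressCheckoutPayment",
                 TOKEN="EC-XXXXXXXXXXXXXXXXX",   # token returned by SetExpressCheckout
                 PAYERID="XXXXXXXXXXXXX",        # payer id returned after buyer approval
                 PAYMENTREQUEST_0_PAYMENTACTION="Sale",
                 PAYMENTREQUEST_0_AMT="10.00",
                 PAYMENTREQUEST_0_CUSTOM=custom_value)
print(requests.post(NVP_ENDPOINT, data=do_params).text)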