Is it possible to send query parameters via POST or GET to a Google Colab notebook? (And also have the response be either plaintext or structured JSON?)
How do you retrieve the query in Colab?
How do you sanitize or suppress the other output so that only plaintext or json is returned to the endpoint call?
You can make direct HTTP requests to the backend from frontend JavaScript. Here's an example notebook.
Reproducing the key bits:
A webserver can be started on the kernel to serve up arbitrary resources.
The client needs to reference the resource with https://localhost:{port}, but this will automatically be translated to http://localhost:{port}.
By default, responses will be cached in the notebook for offline access.
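As a minimal sketch of the idea (this is not the linked notebook, just an illustration): start a small HTTP server on the Colab kernel that reads query parameters and answers with JSON only, then call it from frontend JavaScript via https://localhost:{port}. The port number and the /echo path below are arbitrary example choices.

```python
# Minimal sketch: serve JSON-only responses from the Colab kernel.
# The port (8787) and the /echo path are arbitrary choices for illustration.
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import urlparse, parse_qs

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse ?key=value query parameters from the request URL.
        params = parse_qs(urlparse(self.path).query)
        body = json.dumps({"params": params}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")  # JSON-only response
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Suppress request logging so nothing extra shows up in the cell output.
        pass

server = ThreadingHTTPServer(("localhost", 8787), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Frontend JavaScript in the notebook output would then call something like:
#   fetch("https://localhost:8787/echo?foo=bar").then(r => r.json())
```

Because the handler writes only the JSON body and suppresses its own request logging, the caller gets clean JSON rather than notebook output.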
I want to get a list of the documents available in Qlik Sense using the REST API. I am trying to use the API URL https://url/api/v1/apps/docs to get the list of documents, but this is not working. Is this the correct URL for getting the documents in Qlik Sense?
Where can I find the details on the URL for getting the docs? I have checked the Qlik website for the REST documentation but could not find the details I am looking for.
Thanks
Not sure what your use case is, but I would recommend using the Qlik Repository Service API to get the list of apps. The URL in your question looks like an Engine REST API endpoint, but it does not exist in the endpoint list.
The Repository API is a wrapper around the internal PostgreSQL database, which contains all the metadata (list of apps, streams, extensions, etc.).
A list of all Repository API methods can be found on the Qlik QRS API reference page.
The Repository API supports a few authentication methods:
certificates (for server-to-server, aka backend, communication)
JWT
Header
Session cookie (from the browser)
Have a look at the examples of how to test the responses with Postman (a few other examples are available there as well: PowerShell, Node.js, cURL, etc.).
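For instance, here is a rough Python sketch of a certificate-authenticated QRS call. The port, paths and header names follow the usual Qlik Sense defaults, so treat them as assumptions and adjust them to your site; the xrfkey is an arbitrary 16-character value that must appear both in the URL and in the header.

```python
# Rough sketch: list apps via the Qlik Repository Service (QRS) API using certificates.
# Assumes the default QRS port 4242 and certificates exported from the QMC; adjust for your site.
import requests

XRFKEY = "abcdefghijklmnop"  # any 16-character value; must match the header below

resp = requests.get(
    "https://qlik-server.example.com:4242/qrs/app/full",
    params={"xrfkey": XRFKEY},
    headers={
        "X-Qlik-Xrfkey": XRFKEY,
        # Run the call as a specific user (directory and user id here are examples).
        "X-Qlik-User": "UserDirectory=INTERNAL; UserId=sa_repository",
    },
    cert=("client.pem", "client_key.pem"),  # exported client certificate and key
    verify="root.pem",                      # or False for a quick local test
)
resp.raise_for_status()
for app in resp.json():
    print(app["id"], app["name"])
```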
I tried to get the cookie value using a POST HTTP call to the REST API so that I can pass it on to load an .xml file. The URL was tested in Postman and the Insomnia tool and I was able to get the cookie value, but in Data Factory I am unable to get it.
As per this thread on GitHub, "Cookie" is not allowed in the REST connector, but it is allowed in the HTTP connector. You can try that if it meets your requirements.
You can also check whether this might help you: How to Dynamically adding HTTP endpoint to load data into azure data lake by using Azure Data Factory and the REST api is cookie autheticated
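Outside of Data Factory, the underlying pattern is just two HTTP calls: a POST that returns the cookie, then a GET that sends it back. A hedged Python sketch of that pattern (the URLs and payload fields below are placeholders, not your actual API):

```python
# Hedged sketch of the cookie pattern itself (URLs and payload fields are placeholders).
import requests

session = requests.Session()

# 1. POST to the auth endpoint; the server sets the cookie on the response.
login = session.post(
    "https://api.example.com/auth/login",
    json={"username": "user", "password": "secret"},
)
login.raise_for_status()
print(session.cookies.get_dict())  # the cookie value Postman/Insomnia showed you

# 2. Reuse the cookie (the Session sends it automatically) to fetch the .xml file.
data = session.get("https://api.example.com/files/report.xml")
print(data.text)
```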
I don't mind if you use an example from another API that is not Adobe Analytics'. I just need to know the pattern that I have to follow in order to successfully convert a Postman request into a NiFi request.
After successfully creating requests to pull reports from Adobe Analytics via Postman, I'm having difficulty migrating these Postman requests to NiFi. I haven't been able to find concrete use cases that explicitly explain how to do this kind of task step by step.
I'm trying to build a backend on top of NiFi to handle multiple data extracts from Adobe Analytics in an efficient and robust way, instead of having to create all the required scripts myself. Yet there is more documentation about REST APIs and Postman than there is about REST APIs and NiFi.
In the screenshot below we can see what the Postman request looks like. It takes 3 headers and 1 temporary header that includes the authorization value (Bearer token). This temporary header is generated automatically after filling in the OAuth 2.0 authorization form in the Authorization tab, as shown here.
Then we have the body of the request. This JSON text is generated automatically by debugging Adobe Analytics' workspaces, as shown here.
I'd like to know the following in a step-by-step manner with screenshots if possible:
Which processor(s) should I use in NiFi to obtain a similar response as the one I got in Postman?
Which properties should I add/remove from the processor to make this work?
How should I name these properties?
Is there a default property whose value/name I should modify?
As you can see, the question mainly refers to the property setup in NiFi, as well as processor selection. I have already tried to configure some processors, but I don't seem to get the property setup right, or maybe I'm selecting the wrong processors.
I'm using NiFi v1.6.0 and Postman v7.8.0
This is most likely an easy task for users already familiar with NiFi and API requests, but it has proven challenging to me. Hopefully this will help other users looking to build more robust pipelines by using NiFi instead of doing it manually.
Thanks.
It only takes 3 NiFi processors to replicate a REST API request that works in Postman. In this solution we use a request that contains a nested JSON body. The advantage of this simple approach is that it reduces the amount of configuration required to obtain a successful response from the API, even if you are using a complex JSON request. In this case the body of the JSON request is passed in through the GenerateFlowFile processor, with no need for any other processor to parse or format the request.
Step #1. Create a GenerateFlowFile processor. The only property that you will have to modify is Custom Text. Paste your whole JSON request in there, just as it was in Postman. In this case I'm using the very same JSON shown in the question above. It's a good idea to set the Yield Duration to 10 seconds or more.
Step #2. Create an InvokeHTTP processor. Then modify the 6 properties shown in the screenshots below. Use the same Authorization details you used in Postman; make sure to copy the Bearer token from Postman after it has been tested. Also, don't forget to set the HTTP Method, Remote URL and Content-Type as well.
Step #3. Finally, add a couple of LogAttribute processors to store the output of InvokeHTTP. One of these LogAttribute processors should store successful responses, while the other one can be used for Failure, Original, Retry and No-Retry, or you can create a LogAttribute processor for each of these outputs.
Step #4. Now connect the processors and start your data flow! You should start seeing data populate the successful LogAttribute. Then you can use the Data Provenance option to review the incoming data and confirm that it is exactly the same result you previously obtained from Postman.
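For reference, the flow above is just replicating a plain HTTP POST. Here is a hedged Python sketch of the equivalent request; the endpoint and header names follow the Adobe Analytics 2.0 API as I understand it, but treat them, and the empty report body, as placeholders and copy the real values from your Postman setup.

```python
# Hedged sketch of what the GenerateFlowFile + InvokeHTTP pair reproduces.
# Endpoint, headers and body are placeholders; copy the real values from Postman.
import requests

headers = {
    "Authorization": "Bearer <token copied from Postman>",
    "x-api-key": "<client id>",
    "x-proxy-global-company-id": "<company id>",
    "Content-Type": "application/json",
}

report_body = {
    # Paste the JSON generated by the Adobe Analytics workspace debugger here,
    # exactly as it went into the GenerateFlowFile Custom Text property.
}

resp = requests.post(
    "https://analytics.adobe.io/api/<company id>/reports",
    headers=headers,
    json=report_body,
)
print(resp.status_code, resp.text)
```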
Note: This is a simple, straightforward, "for starters" solution to replicate a Postman API request using a nested static JSON. There are more solutions in StackOverflow that tackle more complex cases, like dynamic JSON. Here's a list of some other posts:
nifi invokehttp post complex json
In NiFi processor 'InvokeHTTP' where do you write body of POST request?
Configuring HTTP POST request from Nifi
I need to process MailGun webhooks. I implemented a solution directly on our web servers to process the webhooks, but MailGun generates so many calls from a large campaign that it effectively becomes a DoS attack.
One solution I've been looking at is using AWS API Gateway to a Lambda function to then push onto an SQS queue. We can then poll the queue at a rate we can manage. Unfortunately we can't get this to work, as AWS API Gateway does not support multipart/form-data content types (which some of the webhooks are). This means that our SQS messages are not well formatted/structured. The best we can do is use the $util.escapeJavaScript($input.body) function in the mapping template to create an SQS message that contains the raw string of the webhook content (with escaped JavaScript chars), which is effectively unparsable, i.e. we can't get data out of it.
I've had a go at using Zapier to process the webhook and push directly on the SQS queue. This can parse the various content types effectively and create a nicely structured message for us, but the cost of the service is not viable.
Has anybody managed this problem in another way? Are there solutions to API Gateway not parsing the content properly? I've deliberately stayed away from MailGun's event polling API as it involves significant delays before the polled data can be 'trusted' (according to MailGun).
Basically, is there another way of getting a nicely parsed message from content types multipart/form-data and application/x-www-form-urlencoded onto the queue?
Any ideas would be much appreciated!
To add, this link highlights issues with AWS API Gateway and multipart/form-data content:
API Gateway - Post multipart\form-data
As you've mentioned, you can base64-encode the body in API Gateway and base64-decode it in the Lambda function to retrieve the original payload (there are standard libraries in every language).
Also, note that you can use multipart form data for non-file bodies:
Get non file body from multipart/form-data using AWS API Gateway and Lambda
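In practice that means the mapping template base64-encodes the raw body (e.g. with $util.base64Encode($input.body)) and the Lambda decodes it before pushing to SQS. A minimal Python sketch, assuming the template produced an event with a single "body" field and that the queue URL is passed via an environment variable:

```python
# Minimal sketch: decode the base64 body passed by the mapping template and push it to SQS.
# Assumes the template produced {"body": "$util.base64Encode($input.body)"} and that
# QUEUE_URL is configured as a Lambda environment variable.
import base64
import os

import boto3

sqs = boto3.client("sqs")

def handler(event, context):
    # Recover the original multipart/form-urlencoded payload as a string.
    raw = base64.b64decode(event["body"]).decode("utf-8")
    sqs.send_message(QueueUrl=os.environ["QUEUE_URL"], MessageBody=raw)
    return {"status": "queued"}
```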
I had the same challenge when building Suet. I ended up switching to Google Cloud functions which I really recommend. Don't waste time on Amazon API Gateway. Use Google Cloud Functions and use a middleware like multer. (You can see the source of Suet's webhook handler here).
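For what it's worth, a rough Python equivalent of that Cloud Functions route (the original uses Node.js with multer): in the Python runtime the Functions Framework hands you a Flask request, which already parses multipart/form-data and x-www-form-urlencoded bodies.

```python
# Rough Python equivalent of the Cloud Functions approach (the original is Node.js + multer).
import functions_framework

@functions_framework.http
def mailgun_webhook(request):
    fields = request.form.to_dict()  # parsed form fields (both content types)
    files = request.files            # any attached files
    # ... push `fields` onto a queue, store them, etc.
    return ("ok", 200)
```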
Not sure if you ever came to a solution, but I have this working with the following settings.
Set up your API Gateway method to use "Use Lambda Proxy integration".
In your Lambda (I use Node.js), use busboy to work through the multipart submission from the MailGun webhook (use this post for help with busboy: Busboy help).
Make sure that any code you want to execute after busboy is complete runs in the 'finish' handler of the busboy code.
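A rough Python equivalent of the same idea, since the answer above is Node.js with busboy: with proxy integration the Lambda receives the raw body and headers, and you split out the multipart fields yourself (here with the standard library) before doing anything with them.

```python
# Rough Python equivalent of the busboy approach for a Lambda proxy integration.
# The event shape (body, isBase64Encoded, headers) is the standard proxy-integration payload.
import base64
from email.parser import BytesParser
from email.policy import default

def handler(event, context):
    body = (base64.b64decode(event["body"]) if event.get("isBase64Encoded")
            else event["body"].encode())
    headers = event.get("headers") or {}
    content_type = headers.get("content-type") or headers.get("Content-Type", "")

    # Rebuild a minimal MIME message so the stdlib parser can split the form fields.
    raw = b"Content-Type: " + content_type.encode() + b"\r\n\r\n" + body
    message = BytesParser(policy=default).parsebytes(raw)

    fields = {}
    for part in message.iter_parts():
        name = part.get_param("name", header="content-disposition")
        fields[name] = part.get_content()

    # Everything that depended on busboy's 'finish' event would go here,
    # after all parts have been read.
    return {"statusCode": 200, "body": "ok"}
```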
I am working on a project to upload objects to S3 using java code. There are some external restrictions that limit my implementation and overall I'm not sure if S3 supports what I'm trying to do.
The restrictions are:
Use V4 authentication
header authentication, not query parameter
REST API, not AWS java SDK
Payload is not hashed (no SHA-256)
That last requirement is because we have hardware support that streams the data directly from storage, so the driving code never touches the data.
Apparently with query parameter authentication I can substitute 'UNSIGNED-PAYLOAD' for the payload hash, but not so with header based authentication.
So my question is whether or not there is any way to upload an object to S3 using the REST API, v4 signature and no hash (SHA-256 or other) on the data itself.
Thanks!
No, according to this post on Amazon's forums:
Re: https://forums.aws.amazon.com/message.jspa?messageID=573632
UNSIGNED-PAYLOAD can be used only with query-string authentication. If you use Authorization header authentication, it cannot be used. As an option, you can use chunked transfer, so you will have to calculate hashes for small chunks of data that can be buffered for hashing. Also, you can still use the older Signature V2, though it won't work with regions created after 30-Jan-2014.
It looks like you can do this with V2 signatures using the header method but, as mentioned above, only against regions created before Jan 30th, 2014.
See: http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html#RESTAuthenticationStringToSign
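If query-string authentication turns out to be acceptable after all, it sidesteps the payload hash entirely: the signature lives in the URL and the body is sent as UNSIGNED-PAYLOAD. A hedged Python sketch, using boto3 only for the signing math (bucket, key and region are placeholders):

```python
# Hedged sketch: query-string (presigned URL) V4 auth, which signs UNSIGNED-PAYLOAD,
# so the uploader never has to hash the object data. Bucket/key/region are placeholders.
import boto3
import requests

s3 = boto3.client("s3", region_name="us-east-1")
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-bucket", "Key": "path/to/object.bin"},
    ExpiresIn=3600,
)

# Any HTTP client (or a hardware streamer) can now PUT the raw bytes to this URL.
with open("object.bin", "rb") as f:
    resp = requests.put(url, data=f)
resp.raise_for_status()
```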
You can upload files using POST, which does not require a payload hash. But with POST, the file size is limited to 5 GB.
http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-authentication-HTTPPOST.html
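A hedged sketch of that POST route, again using boto3 only to build the signed policy (bucket and key names are placeholders); the V4 signature covers the policy form fields, not a hash of the file bytes:

```python
# Hedged sketch: browser-style POST upload (SigV4 policy), which does not require
# a SHA-256 of the payload. Bucket and key names are placeholders.
import boto3
import requests

s3 = boto3.client("s3", region_name="us-east-1")
post = s3.generate_presigned_post(
    Bucket="my-bucket",
    Key="path/to/object.bin",
    ExpiresIn=3600,
)

with open("object.bin", "rb") as f:
    resp = requests.post(
        post["url"],
        data=post["fields"],                # policy, credential, signature, key, ...
        files={"file": ("object.bin", f)},  # the object bytes, unhashed
    )
resp.raise_for_status()  # S3 returns 204 No Content on success
```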