Amplitude Analytics Dashboard REST API Event object - amplitude-analytics

Using the Amplitude Dashboard REST API, I am attempting to get a count of all unique times a custom event was triggered, filtered by a custom event property. However, I am unable to get even the simplest event segmentation call to run, though all other endpoints (except Funnels, which also uses the e event parameter) are working as expected. In other words, my auth is working and I'm able to successfully get data from all the endpoints that don't require the e event parameter.
Here is an example of a constructed event segmentation endpoint that is as simple as I believe it could possibly be, and which is failing with a 400 error.
https://amplitude.com/api/2/events/segmentation?e={"event_type":"_active"}&start=20170401&end=20170402
While the call I ultimately want to make is more complex and involves a filter, I can't get even this call to work, and it is one of the simplest event segmentation calls possible, given that e, start, and end are all required parameters.

Try properly percent-encoding the e parameter:
https://amplitude.com/api/2/events/segmentation?e=%7B%22event_type%22%3A%22_active%22%7D&start=20170401&end=20170402
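The encoded e parameter can also be built programmatically rather than by hand; a minimal JavaScript sketch:

```javascript
// Serialize the event definition, then percent-encode it for the query string.
const event = JSON.stringify({ event_type: "_active" });
const url = "https://amplitude.com/api/2/events/segmentation" +
  "?e=" + encodeURIComponent(event) +
  "&start=20170401&end=20170402";
console.log(url);
// e=%7B%22event_type%22%3A%22_active%22%7D
```

`encodeURIComponent` escapes the braces, quotes, and colon that otherwise break URL parsing and cause the 400.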

Related

429 error trying to hit an API from within an Azure Data Factory For Each activity

We're trying to put together a proof-of-concept where we read data from an API and store it in a blob. We have a For Each activity that loops through a file that has parameters that are used in an API call. We are trying to do this in parallel. The first API call works fine, but the second call returns a 429 error. Are we asking the impossible?
Error code 429 usually means too many requests. Inside the ForEach activity, use the Sequential execution option and see if that helps.

REST API POST endpoint that doesn't persist created resource

I have a REST API in which I would like to add an endpoint that runs an algorithm and returns a result. Let's assume the algorithm is fast enough that this can be done synchronously. However, the result can be large.
Option A
I treat the result of the algorithm as a "resource". I could implement the following
POST /api/my-result
This creates a new result by running the algorithm. The inputs to the algorithm are in the request body. The response contains the id or some other identifiable representation of the result.
GET /api/my-result/{id}/?view=table
This allows the client to get a table representation of the result. Similarly there could be additional views, filters, etc. that could be implemented the same way.
However, this requires me to persist the results in a database. There are two issues: (a) the results can be large, and (b) the client often runs the algorithm several times with different inputs before deciding to "keep" one of the results - so ideally I only want to store the final result in the database.
Option B
POST /api/my-algorithm/
This accepts the parameters of the algorithm in the request body and returns the result in the response body.
POST /api/my-result-table-view
This accepts the result in the request body and returns a response that transforms the representation of the result into a table view. The reason it is not GET /api/my-result/{id}/?view=table is that the client needs to be able to call this on results that are not persisted. The "table view of the result" is the resource that is created here.
Similarly, I could implement each view of the result as a separate endpoint.
POST /api/my-result
This creates a new result (without running the algorithm). For example, if the result is an image, this POST request may accept the image as a file upload and simply store it. The client calls POST /api/my-algorithm/ repeatedly, and when they are happy with the result, they call this endpoint to create the result.
I believe Option A is the more "RESTful" way, but with the overhead of persisting all results.
Which option do you recommend? Can Option B be implemented differently to make it more "RESTful"? Is there a way I can create a resource without actually persisting it in the database (maybe in a cache)? If you propose the caching route, please include more explanation as I'm not familiar with how that would be implemented.
(If it is relevant, I'm using DRF for implementing the API)

Routing incoming request

I am trying to create a simple API using Go that performs certain operations depending on the data provided.
I was planning to provide JSON data to this API and get details from it for further use.
Since I was trying to provide JSON data I created the routing using gorilla/mux as below:
router.HandleFunc("/msa/dom/perform-factory-reset?json={jsonData}", CallGet)
log.Fatal(http.ListenAndServe(":8080", router))
But while trying to hit the endpoint http://localhost:8080/msa/dom/perform-factory-reset?json={"vrf":"ds","ip":"45","mac":"452","method":"gfd"} I am getting 404 page not found error.
Hence I tried to change the implementation such that new routing is done as:
router.HandleFunc("/msa/dom/perform-factory-reset/json={jsonData}", CallGet)
This works absolutely fine and I am able to perform desired tasks. Could someone tell me why this is happening?
Is the router gorilla/mux? If so, you cannot add query parameters to the path like that. You have to:
router.Path("/msa/dom/perform-factory-reset").
Queries("json","{jsonData}").HandlerFunc(CallGet)
If it is some other router, you still probably have to register the path without the query parameters, and then read the query parameter values from the request inside the handler.

Model parametrized API Call in Activity Diagram

I have an activity diagram with two swimlanes (Client and Server). I want to model a request call from Client to Server.
Is it correct to use Signals Notation for Calls between systems? Are there alternatives?
The call is parametrized, Client wants to send something which was created before. How to model this?
Thankful for any hint! Here's my example:
My answer has to be improved, but here is a first step.
The norm/spec says: "A SendSignalAction is an InvocationAction that creates a Signal instance and transmits the instance to the object given on its target InputPin. A SendSignalAction must have argument InputPins corresponding, in order, to each of the (owned and inherited) Properties of the Signal being sent, with the same type, ordering and multiplicity as the corresponding attribute."
And a SendSignalAction has an association to a target object, which is an input pin.
So for your question about Request:item, I would use input pins: one for the object from which the Signal is created and one to define the target. (In the schema the target comes from an output pin, but a data store may be used.) Then, after sending the request, the client waits for the answer. The AcceptEvent is linked to a trigger (not shown in the schema), which is a signal, the one created by the server. But you cannot link SendRequest of the Client to ReceiveRequest of the Server, because that is not how it runs.
For the server, you can apply similar reasoning.
Concerning the parametrization of the call, I would use an InputPin to model the arguments of the call, i.e. the object sent by the call, as shown below.
Signal and Call notations seem correct to me, but I am not used to having the send and receive actions in the same diagram, so I will suggest two alternatives.
1) First remove them...
2) Separate Client and Server Modelling
Let me know what you think about that and what seems to be clear for you...
I also recognize the tool you used so please find my project at:
https://www.dropbox.com/s/s1mx46cb3linop0/Project1.zip?dl=0
As I see it, it should be modeled like this:
The server runs in an independent loop and starts by waiting for a request. There is an object flow between Create request and Query result set; this symbolizes data placed in a queue (or whatever is appropriate). The receipt of the result set would be done below in a similar way; I just left that out for brevity.
You can also draw an object for the query set instead of the ActionPins.

Using visjs manipulation to create workflow dependencies

We are currently using visjs version 3 to map the dependencies of our custom built workflow engine. This has been WONDERFUL because it helps us to visualize the flow and find invalid or missing dependencies. What we want to do next is simplify the process of building the dependencies using the visjs manipulation feature. The idea would be that we would display a large group of nodes and allow the user to order them correctly. We then want to be able to submit that json structure back to the server for processing.
Would this be possible?
Yes, this is possible.
Vis.js dispatches various events that relate to user interactions with the graph (e.g. manipulations or position changes), for which you can add handlers that modify or store the data on change. If you use DataSets to store the nodes and edges in your network, you can always use the DataSets' get() function to retrieve all elements in your handler in JSON format. Then, in your handler, just use an ajax request to transmit the JSON to your server, storing the entire graph in your DB or saving the JSON as a file.
The opposite applies for loading the graph: simply query the JSON from your server and inject it into the node and edge DataSets using the update method.
You can also store the network's current options using its getOptions method, which returns all applied options as JSON.
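The save/load round trip described above can be sketched as a pair of helpers. This assumes the standard vis.js DataSet API (get() returning all items, update() merging items by id); the /api/workflow endpoint name is a placeholder:

```javascript
// Serialize the graph from the DataSets backing the network.
function exportGraph(nodes, edges) {
  // get() with no arguments returns every item as a plain array.
  return JSON.stringify({ nodes: nodes.get(), edges: edges.get() });
}

// Restore a graph previously serialized with exportGraph.
function importGraph(nodes, edges, json) {
  const data = JSON.parse(json);
  // update() inserts new items and merges existing ones by id.
  nodes.update(data.nodes);
  edges.update(data.edges);
}

// Saving to the server (hypothetical endpoint), e.g. from a change handler:
// fetch('/api/workflow', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: exportGraph(nodes, edges),
// });
```

Because the network renders whatever is in the DataSets, calling importGraph on the live DataSets redraws the loaded workflow without rebuilding the network object.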