Calling 2 different REST APIs in a single GET request - rest

I have one REST service which has two resources.
HostName: CustomerAdminSystem.com
Operation: GET
First-ResourceName: AdminData
Second-ResourceName: CustomerData
The two resources are completely independent of each other, so we can definitely call them in parallel and get the data.
What is the way to get such data in a single call?
I am aware of the Composite API concept, which basically enables calling 2 different APIs of a single service in one go.
Ref: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_composite_composite.htm
But as mentioned on that page:
"Executes a series of REST API requests in a single call. You can use the output of one request as the input to a subsequent request."
We don't want to keep the second call on hold until the first finishes.
How can we really invoke 2 different resources in parallel?
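For context, what we are effectively trying to collapse into a single request is plain client-side parallelism, roughly like this (a minimal Python sketch; only the host name and the two resource names come from the description above, the URL layout and everything else is an assumption):

from concurrent.futures import ThreadPoolExecutor
import requests

BASE = "https://CustomerAdminSystem.com"   # host from the question; path layout assumed

def fetch(resource):
    # one independent GET per resource
    resp = requests.get(f"{BASE}/{resource}")
    resp.raise_for_status()
    return resp.json()

with ThreadPoolExecutor(max_workers=2) as pool:
    admin_future = pool.submit(fetch, "AdminData")
    customer_future = pool.submit(fetch, "CustomerData")
    admin_data, customer_data = admin_future.result(), customer_future.result()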
Any help would be appreciated.
Thanks.

Related

Google Data Fusion: "Looping" over input data to then execute multiple Restful API calls per input row

I have the following challenge I would like to solve preferably in Google Data Fusion:
I have one web service that returns about 30-50 elements describing an invoice in a JSON payload like this:
{
  "invoice-services": [
    {
      "serviceId": "[some-20-digit-string]",
      // some other stuff omitted
    },
    [...]
  ]
}
For each occurrence of serviceId I then need to call another webservice https://example.com/api/v2/services/{serviceId}/items repeatedly, where each serviceId comes from the first call. I am only interested in the data from the second call, which is to be persisted into BigQuery. This second service call doesn't support wildcards or any other mechanism to aggregate the items - i.e. if I have 30 serviceIds from the first call, I need to call the second webservice 30 times.
I have made the first call work, I have made the second call work with a hard-coded serviceId, and also the persistence into BigQuery. These calls simply use the Data Fusion HTTP adapter.
However, how can I use the output of the first service in such a way that I issue one webservice call to the second service for each row returned from the first call - effectively looping over all serviceIds?
I completely appreciate this is very easy in Python code (roughly the sketch below), but for maintainability and fit with our environment I would prefer to solve this in Data Fusion or, if need be, any of the other as-a-Service offerings from Google.
Any help is really appreciated!
J
PS: This is NOT a big data problem - I am looking at about 50 serviceIds and maybe 300 items.
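For reference, the Python equivalent I am trying to reproduce in Data Fusion is roughly the following (a minimal sketch: only the second endpoint and the field names come from the description above; the URL of the first call, authentication and the actual BigQuery load are assumed or omitted):

import requests

BASE = "https://example.com/api/v2"

# first call: returns the invoice-services payload (URL assumed, only the response shape is given above)
invoice = requests.get(f"{BASE}/invoice-services").json()

all_items = []
for service in invoice["invoice-services"]:
    service_id = service["serviceId"]
    # second call, once per serviceId - no wildcard or aggregation support
    items = requests.get(f"{BASE}/services/{service_id}/items").json()
    all_items.extend(items)

# all_items would then be written to BigQuery (sink plugin or client library)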

How to define 2 different endpoints for variable path parameter and specific path parameter in swagger

I want to define 2 REST endpoints:
POST on /a/{id}/c
POST on /a/b/c
b here being a specific keyword.
The reason I need separate endpoints is that the payloads are different for the two.
The problem is that when I do a POST on /a/b/c, the request goes to the first endpoint and fails due to the payload mismatch.
Can this be done in Swagger?
Is there a better way to do this?
Based on your comment, you could create two new POST endpoints and separate their functionality a little. Separating the functionality in the endpoint itself also makes it easier to read and to work with, as it immediately states what the endpoint is there for - whether it be for a single user or a whole batch of users, while still performing the same action. For example:
Adding a role to a specific user.
POST /system/roles/user/{userid}
Adding roles to a batch of users at once.
POST /system/roles/batch
Would that work for you?
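For illustration, that split could look like this in a framework that generates the Swagger/OpenAPI document from the code (a minimal FastAPI sketch; only the two paths come from the suggestion above, the payload models are invented):

from typing import List
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class RoleAssignment(BaseModel):
    # hypothetical payload for a single user
    role: str

class BatchRoleAssignment(BaseModel):
    # hypothetical payload for a batch of users
    user_ids: List[int]
    role: str

@app.post("/system/roles/user/{userid}")
def add_role_to_user(userid: int, body: RoleAssignment):
    return {"userid": userid, "role": body.role}

@app.post("/system/roles/batch")
def add_roles_to_batch(body: BatchRoleAssignment):
    return {"count": len(body.user_ids), "role": body.role}

Because the fixed /batch path and the parameterised /user/{userid} path no longer overlap, each request maps unambiguously to one operation and one payload schema.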

Azure function REST API handling GET POST

I'm following some Azure Functions tutorials on creating a REST API, and this all seems straightforward. However, I'm struggling to understand how I should deal with the different verbs.
Is the recommended practice to create a separate Azure Function for each verb? And then also separate functions for each variation of routing for each verb, e.g. a separate function for each of:
products/{productid} (GET)
products (GET, returns list)
products/me (GET, returns a list of products belonging to the user making the request)
It seems to me I'm going to end up with a lot of Azure Functions here. In the WebAPI approach I would have put all of these in a single controller and attribute routing would have taken care of the rest.
Is there another way to achieve this with Azure Functions?
You can use Azure Functions Proxies to set up routing for HTTP verbs and parameters and then pass the call down to a single function. Creating a function for each verb/parameter combination seems to be overkill.
Of course, if the processing logic is completely different, e.g. for GET vs POST, it makes sense to put those into separate functions. So, in the end it's your call, but you have tools for both scenarios.
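If Proxies feel like too much machinery, another option is a single HTTP-triggered function with an optional route parameter that does a small amount of dispatching itself. A minimal Python sketch (it assumes function.json declares "route": "products/{productid?}" and "methods": ["get", "post"]; the response bodies are placeholders):

import json
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    product_id = req.route_params.get("productid")

    if req.method == "GET" and product_id is None:
        # products (GET, returns list)
        return func.HttpResponse(json.dumps({"products": []}), mimetype="application/json")
    if req.method == "GET" and product_id == "me":
        # products/me (GET, products belonging to the caller - lookup omitted)
        return func.HttpResponse(json.dumps({"products": [], "owner": "me"}), mimetype="application/json")
    if req.method == "GET":
        # products/{productid} (GET)
        return func.HttpResponse(json.dumps({"id": product_id}), mimetype="application/json")
    if req.method == "POST" and product_id is None:
        # products (POST, create - persistence omitted, echoes the request body)
        return func.HttpResponse(req.get_body(), status_code=201, mimetype="application/json")
    return func.HttpResponse(status_code=405)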

Single v Multiple REST endpoint

We're working on making legacy code (Java) accessible via a REST API (Java/Spring/JSON).
Essentially our legacy code has numerous command-processor type classes which take in commands and serve data back to the caller. So we have numerous command processors with numerous methods in them. Each method is analogous to a GET/POST of data, i.e. getCustomer / getCustomers / addCustomer etc.
We discussed 2 options:
Option 1 - Create endpoints for every operation.
Option 2 - Create one single REST endpoint and pass in a generic payload. The JSON being passed in would have a "type" identifier, from which the endpoint could then construct the required object type.
I think option #1 is a better design as it's much simpler and adheres more to REST. I don't like option #2, as the single endpoint then becomes a fancy front controller or dispatcher and would end up containing a huge switch statement.
I'd be interested in seeing what you guys think, pros / cons.
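For what it's worth, option #2 in practice tends to end up looking like the following (a hedged Flask sketch, not our actual code; the command names just mirror the legacy processors mentioned above), which is exactly the front-controller/switch shape described:

from flask import Flask, jsonify, request

app = Flask(__name__)

# one entry per legacy command - this map is the "huge switch statement"
HANDLERS = {
    "getCustomer": lambda payload: {"customer": payload.get("id")},
    "getCustomers": lambda payload: {"customers": []},
    "addCustomer": lambda payload: {"created": True},
}

@app.route("/api/command", methods=["POST"])
def dispatch():
    body = request.get_json()
    handler = HANDLERS.get(body.get("type"))
    if handler is None:
        return jsonify({"error": "unknown type"}), 400
    return jsonify(handler(body.get("payload", {})))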

How to pass a large number of input parameters to RESTful service?

I have a RESTful service that returns detailed data about machines for a supplied list of ids: GET api/machine/
http://service.com/api/machine/1,2,3,4
Up until now this has been fine, since I am getting a small number of machines at a time, but now I need to get all machines (more than 1000). This exceeds the 2000-character limit on URLs.
I have gotten both of the options below to work and I'm looking for some community feedback on which way to go.
Option 1: Split up my GET. Make multiple calls with a subset of the ids. Pros: I am doing a read, so using the HTTP verb GET makes sense. Cons: if a person new to the service doesn't know about this limit, or doesn't use my client, it would cause problems.
Option 2: Add a PUT/POST method and include the full list of ids in the body. Pros: makes one call to get all the data. Cons: I am now doing a read via a PUT/POST.
Probably your best course of action would be something along the lines of option 2: you can create a JSON document on your side with an array of the ids you want to send in the body of the message. If there's a possibility of it still being far too large, you can split it into several messages; when you receive the response to one, you send the next chunk in the queue, and so on.
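A minimal sketch of that approach (the /api/machine/query endpoint, the body shape and the chunk size are assumptions, not something the service above already exposes):

import requests

def fetch_machines(ids, chunk_size=500):
    """POST the id list in chunks and collect the machine details."""
    machines = []
    for start in range(0, len(ids), chunk_size):
        chunk = ids[start:start + chunk_size]
        resp = requests.post("http://service.com/api/machine/query", json={"ids": chunk})
        resp.raise_for_status()
        machines.extend(resp.json())
    return machines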
Another option, used by the Facebook API among others, is to create a "/batch" POST method which can be used to make multiple requests in one go.
So instead of having http://service.com/api/machine/1,2,3,4,5,.... you'll have a batch of requests with /machine/1, /machine/2, /machine/3, etc.
The advantage is that you keep clean RESTful URLs (no more comma-separated values) and it scales very well since you can batch as many requests as you want.
The disadvantage is that it is slightly more complex to build.
See here for more information: https://developers.facebook.com/docs/graph-api/making-multiple-requests
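Such a batch call could look roughly like this from the client side (a sketch only; the /api/batch endpoint, its request format and its response shape are assumptions modelled on the Facebook example, not part of the service above):

import requests

sub_requests = [{"method": "GET", "relative_url": f"machine/{i}"} for i in (1, 2, 3)]
resp = requests.post("http://service.com/api/batch", json={"batch": sub_requests})
for sub_response in resp.json():
    # assumed: the batch endpoint returns one response object per sub-request
    print(sub_response)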