dlang vibe.d RESTful Service Performance - rest

Thank you for your assistance.
Question:
Why does my REST service perform so poorly when I use REST interfaces in vibe.d (D), compared to creating request handlers manually?
More Information:
I have been prototyping a RESTful service using the vibe.d library in D. I'm running a test where a client sends GET and POST requests to the server with a payload of a given size, say 2048 bytes (i.e. the GET response would carry 2 KB, and the POST request would carry 2 KB).
I'm using the "registerRestInterface" and "RestInterfaceClient" API in the vibe.d library to create my server and client sort of like this...
Server:
auto routes = new URLRouter;
registerRestInterface(routes, new ArtifactArchive());
auto settings = new HTTPServerSettings();
settings.port = port;
settings.bindAddresses = [host];
settings.options |= HTTPServerOption.distribute;
listenHTTP(settings, routes);
runEventLoop();
Client:
IArtifactArchive archive = new RestInterfaceClient!IArtifactArchive(endpoint);
IArtifactArchive.Payload result;
result = archive.getContents(info.FileDescriptor, offset, info.BlockSize);
I'm not doing anything fancy in my interface, just filling a byte array and passing it along. I know performance depends on many different things; however, I see a transfer rate of about 160 kB/s when using REST interfaces in vibe.d and roughly 5 MB/s when using manual HTTP request handlers like this:
void ManualHandleRequest(HTTPServerRequest req, HTTPServerResponse res) ...
listenHTTP(settings, &ManualHandleRequest);
I really like the REST interface API, but I can't suffer that kind of performance loss in order to use it. Any thoughts on why it seems so much slower than the other method? Perhaps I'm configuring something wrong or missing something. I am somewhat new to the D programming language and the vibe.d library.
Thank you for your time!

I suspect that with the custom request handler you write the response as a raw byte array. The REST interface generator serializes all return data into JSON by default, which creates huge overhead compared to a raw array.
This is just a guess, though; I'd need to see the actual REST method implementation to say for sure and/or propose a solution.

Related

REST API allow update of resource depending on state of resource

I have recently read the guide on implementing RESTful APIs in Spring Boot from the official Spring.io tutorials website (link to tutorial: https://spring.io/guides/tutorials/rest/)
However, something in the guide seemed to contradict my understanding of how REST APIs should be built. I am now wondering if my understanding is wrong or if the guide is not of as high a quality as I expected it to be.
My problem is with this implementation of a PUT method to update the status of an order:
@PutMapping("/orders/{id}/complete")
ResponseEntity<?> complete(@PathVariable Long id) {

    Order order = orderRepository.findById(id) //
            .orElseThrow(() -> new OrderNotFoundException(id));

    if (order.getStatus() == Status.IN_PROGRESS) {
        order.setStatus(Status.COMPLETED);
        return ResponseEntity.ok(assembler.toModel(orderRepository.save(order)));
    }

    return ResponseEntity //
            .status(HttpStatus.METHOD_NOT_ALLOWED) //
            .header(HttpHeaders.CONTENT_TYPE, MediaTypes.HTTP_PROBLEM_DETAILS_JSON_VALUE) //
            .body(Problem.create() //
                    .withTitle("Method not allowed") //
                    .withDetail("You can't complete an order that is in the " + order.getStatus() + " status"));
}
From what I read at https://restfulapi.net/rest-put-vs-post/ a PUT method should be idempotent; meaning that you should be able to call it multiple times in a row without it causing problems. However, in this implementation only the first PUT request would have an effect and all further PUT requests to the same resource would result in an error message.
Is this okay according to RESTful API principles? If not, what would be a better method to use? I don't think POST would be any better.
Also, in the same guide, they use the DELETE method in a similar way to change the status of an order to cancelled:
@DeleteMapping("/orders/{id}/cancel")
ResponseEntity<?> cancel(@PathVariable Long id) {

    Order order = orderRepository.findById(id) //
            .orElseThrow(() -> new OrderNotFoundException(id));

    if (order.getStatus() == Status.IN_PROGRESS) {
        order.setStatus(Status.CANCELLED);
        return ResponseEntity.ok(assembler.toModel(orderRepository.save(order)));
    }

    return ResponseEntity //
            .status(HttpStatus.METHOD_NOT_ALLOWED) //
            .header(HttpHeaders.CONTENT_TYPE, MediaTypes.HTTP_PROBLEM_DETAILS_JSON_VALUE) //
            .body(Problem.create() //
                    .withTitle("Method not allowed") //
                    .withDetail("You can't cancel an order that is in the " + order.getStatus() + " status"));
}
This looks very wrong to me. We are not deleting anything here; it is basically the same as the previous PUT method, just with a different state we want to move to. Am I correct to assume that this part of the tutorial is bogus?
TL;DR: what HTTP method is right to use when you want to advance the status of a resource to the next stage without giving an option of going back to an earlier stage? Basically an update/patch that will invalidate its own pre-conditions.
something in the guide seemed to contradict my understanding of how REST APIs should be built. I am now wondering if my understanding is wrong or if the guide is not of as high a quality as I expected it to be.
I wouldn't consider this guide to be a reliable authority - the described resource model has some very questionable choices.
From what I read at https://restfulapi.net/rest-put-vs-post/ a PUT method should be idempotent; meaning that you should be able to call it multiple times in a row without it causing problems. However, in this implementation only the first PUT request would have an effect and all further PUT requests to the same resource would result in an error message.
The authoritative definition of idempotent semantics in HTTP is currently RFC 7231.
A request method is considered "idempotent" if the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request.
Note: "effect", not "response".
PUT /orders/12345/complete
means "please replace the current representation of /orders/12345/complete with the representation in the payload". In other words "save this file on top of your current copy". Saving the same file two or three times in row produces the same effect as saving the file once, so that's "idempotent".
HTTP does not define exactly how a PUT method affects the state of an origin server beyond what can be expressed by the intent of the user agent request and the semantics of the origin server response. It does not define what a resource might be, in any sense of that word, beyond the interface provided via HTTP. It does not define how resource state is "stored", nor how such storage might change as a result of a change in resource state, nor how the origin server translates resource state into representations. Generally speaking, all implementation details behind the resource interface are intentionally hidden by the server. -- RFC 7231
So in their CURL example
PUT /orders/4/complete HTTP/1.1
Host: localhost:8080
User-Agent: curl/7.54.0
Accept: */*
The meaning of this message is "replace the current representation of /orders/4/complete with an empty representation". But the origin server gets to choose how to do that, and which standardized responses to return to the client.
So this is fine.
All work is transacted by politely placing documents in in-trays, and then some side effect of placing that document in an in-tray causes some business activity to occur -- Jim Webber, 2011.
In this case, the document we are putting into the "in-tray" happens to be blank.
#DeleteMapping("/orders/{id}/cancel")
I would never approve that choice in a code review. DELETE (like PUT) has semantics in the "transfer of documents over a network domain".
The DELETE method requests that the origin server remove the association between the target resource and its current functionality. In effect, this method is similar to the rm command in UNIX: it expresses a deletion operation on the URI mapping of the origin server rather than an expectation that the previously associated information be deleted.
Trying to hijack the method because the spelling is kind of like the domain action is the wrong heuristic to use in choosing methods.
Relatively few resources allow the DELETE method -- its primary use is for remote authoring environments, where the user has some direction regarding its effect.
The point being that we have a general purpose document manipulation interface, and we are using that interface as a facade that allows us to drive business activity. So we should be using our standardized message semantics the same way every other page on the web does.
@PutMapping would be defensible, using the same justification as we did for /complete.
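For illustration only, here is a minimal sketch of what that might look like. It reuses the types from the tutorial snippets above (Order, Status, OrderNotFoundException, orderRepository and assembler are assumed to come from that context); this is a sketch, not the tutorial's actual code:
@PutMapping("/orders/{id}/cancel")
ResponseEntity<?> cancel(@PathVariable Long id) {

    Order order = orderRepository.findById(id)
            .orElseThrow(() -> new OrderNotFoundException(id));

    // Repeating the request after a successful cancel leaves the resource
    // unchanged, which is what idempotent semantics promise: the same effect
    // on the server, not necessarily the same response.
    if (order.getStatus() == Status.CANCELLED) {
        return ResponseEntity.ok(assembler.toModel(order));
    }

    if (order.getStatus() == Status.IN_PROGRESS) {
        order.setStatus(Status.CANCELLED);
        return ResponseEntity.ok(assembler.toModel(orderRepository.save(order)));
    }

    return ResponseEntity
            .status(HttpStatus.METHOD_NOT_ALLOWED)
            .header(HttpHeaders.CONTENT_TYPE, MediaTypes.HTTP_PROBLEM_DETAILS_JSON_VALUE)
            .body(Problem.create()
                    .withTitle("Method not allowed")
                    .withDetail("You can't cancel an order that is in the " + order.getStatus() + " status"));
}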
what HTTP method is right to use when you want to advance the status of a resource to the next stage without giving an option of going back to an earlier stage? Basically an update/patch that will invalidate its own pre-conditions.
PUT, PATCH, and POST are all appropriate methods to use when editing the representation of a resource. Use PUT or PATCH when you are sending a replacement representation for the resource, use POST when you are asking the server to calculate what the edit to the representation should be.

What's the use case for gRPC where it could definitely overcome REST

I made up a simple benchmark for the simplest case: sending the string "Hello world" over gRPC and REST in Ruby:
# REST example
require 'sinatra'
set :bind, '0.0.0.0'
set :logging, false
get '/' do
  'Hello, world!'
end
The gRPC example is based on the official examples:
// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings
message HelloReply {
  string message = 1;
}
class GreeterServer < Helloworld::Greeter::Service
  def say_hello(hello_req, _unused_call)
    Helloworld::HelloReply.new(message: "Hello #{hello_req.name}")
  end
end
I deployed this code to a remote server, ran a 1000-request benchmark (ab for REST, a looping client for gRPC) and got comparable results: 51 seconds vs 53 (REST vs gRPC).
So I concluded that in this case (with a small amount of data in the response) there is no benefit to gRPC. When would the benefits appear? When the data size is on the order of kilobytes or even megabytes? Or are there essentially different use cases for gRPC, like streaming and duplexing data between server and client?
This blog post indicates that gRPC performs better while being slightly harder to use.
I think the improved performance comes from using protocol buffers for data transmission; I believe that means data is transmitted in a binary format, which would mean improved performance when you have more data, particularly non-string data.
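To make that concrete, here is a rough sketch (in Java, purely for illustration) comparing the wire size of the HelloReply message above in protobuf and JSON form. It assumes protobuf-java and protobuf-java-util are on the classpath and that HelloReply is the class protoc generated from the proto above; the import for HelloReply is omitted because its package depends on your protoc options.
import com.google.protobuf.util.JsonFormat;

public class SizeCompare {
    public static void main(String[] args) throws Exception {
        // HelloReply is assumed to be the class generated from the proto above.
        HelloReply reply = HelloReply.newBuilder().setMessage("Hello world").build();

        byte[] wire = reply.toByteArray();  // protobuf wire format: tag + length + UTF-8 bytes, ~13 bytes here
        String json = JsonFormat.printer()
                .omittingInsignificantWhitespace()
                .print(reply);              // {"message":"Hello world"}, ~25 bytes

        System.out.println("protobuf: " + wire.length + " bytes");
        System.out.println("json:     " + json.getBytes("UTF-8").length + " bytes");
    }
}
For a short string the saving is modest, because the string bytes dominate either way; the gap widens for numeric and repeated fields, which protobuf encodes as compact varints instead of decimal text plus field names.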

Using http response headers in order to communicate server-side errors from the backend to the front-end

I am working on a REST backend consumed by a javascript/ajax front-end.
I am trying to find a way to deal with invalid requests sent over by the front-end to the backend.
One of the issues I have is that HTTP status codes such as 400 or 409 are not fine-grained enough to cover business-logic errors such as passwords not matching (in the case of a user changing his password) or an email being unknown to the system (in the case of a user trying to sign in to the application).
I am thinking of using HTTP response headers in order to communicate server-side errors from the backend to the front-end.
I could for instance have an Error enum (or a class with constants) as follows:
public enum Error {
    UNKNOWN_EMAIL,
    PASSWORDS_DONT_MATCH,
    // etc.
}
I would then use that enum in order to set the headers on the response as follows:
response.setHeader(Error.UNKNOWN_EMAIL.name(), "true");
... and deal with the error appropriately on the front-end.
Can the above architecture be improved? If so how?
Is my usage of HTTP response headers correct?
Should I use constants or enums?
Is my usage of HTTP response headers correct?
I do not think it is incorrect; however, I prefer to send an error message/code directly back in the response body. This is usually more convenient for the client to access and is more explicit. As part of consuming each response, the client can check the contents of the errors (you may have multiple) and act accordingly. The following is a little contrived, just to provide an example:
// ...
{
    "errors": {
        "username": "not found",
        "password": "no match"
    },
    "warnings": {
        "account": "expired"
    }
}
// ...
The above is quite a simple approach - your JSON message can be as sophisticated as you wish, but keep in mind that you should only expose the information the client needs to achieve its goal. This will also depend on whether you are publishing an API for third-party/public consumption or whether it's just for your own clients, i.e. your own website. If other parties consume it, then put some thought into it, since once you publish it you need to maintain it that way - otherwise you break any consumers.
Check out JSON API for some standardized guidance on handling errors.
Should I use constants or enums?
Since these are a related set of properties, an enum is preferable to constants (I assume you are using Java).
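As a rough sketch of the body-based approach, assuming Jackson as the JSON mapper (the ApiError and ErrorResponse names below are made up for illustration):
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical error codes; an enum keeps the set of values closed and typo-proof.
enum ApiError { UNKNOWN_EMAIL, PASSWORDS_DONT_MATCH }

// Hypothetical response wrapper mapping a field name to an error code.
class ErrorResponse {
    public Map<String, String> errors = new LinkedHashMap<>();

    ErrorResponse add(String field, ApiError error) {
        errors.put(field, error.name());
        return this;
    }
}

public class ErrorBodyExample {
    public static void main(String[] args) throws Exception {
        ErrorResponse body = new ErrorResponse()
                .add("email", ApiError.UNKNOWN_EMAIL)
                .add("password", ApiError.PASSWORDS_DONT_MATCH);

        // In a real controller you would write this JSON to the response body
        // alongside an appropriate 4xx status code, rather than printing it.
        System.out.println(new ObjectMapper().writeValueAsString(body));
        // {"errors":{"email":"UNKNOWN_EMAIL","password":"PASSWORDS_DONT_MATCH"}}
    }
}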

How to Implement an Infrastructure for Automated IVR calls?

I need tips to build an infrastructure to send 1000 simultaneous voice calls (automated IVR calls with VoiceXML). Up to now I have used Asterisk with Voiceglue, but now I have performance issues.
The infrastructure was like this:
Asterisk pulls a request from the queue
the queue consumer creates a call file
when the call ends, the call file is read and the status is sent to the application server
To be honest, I am asking for tips to implement an infrastructure like CallFire [1] or Voxeo [2].
[1]https://www.callfire.com/
[2]http://voxeo.com/
You can go with Voxeo Prophecy (http://voxeo.com/prophecy/), a good server which has the capability of making simultaneous voice calls.
Note: what you are expecting to do will not be possible with Voxeo Prophecy alone; it will also depend on the web server (Tomcat, IIS, etc.), especially if you are dealing with databases like SQL Server or Oracle.
Please refer to the following to understand the architecture:
http://www.alpensoftware.com/define_VoiceOverview.html
CallFire's API has a CreateBroadcast method where you can throw up an IVR using their XML in seconds. You can read up on the documentation here:
https://www.callfire.com/api-documentation/rest/version/1.1#!/broadcast
CallFire also offers a PHP SDK, hosted on GitHub, with examples of how to do this. The SDK requires minimal setup and allows you to easily tap into the API's robust functionality. Version 1.1 can be found here, with instructions on how to get started: https://github.com/CallFire/CallFire-PHP-SDK
The method call might look something like this. Note the required dependencies.
<?php
use CallFire\Api\Rest\Request;
use CallFire\Api\Rest\Response;
require 'vendor/autoload.php';
$dialplan = <<<DIALPLAN
<dialplan><play type="tts">Congratulations! You have successfully configured a CallFire I V R.</play></dialplan>
DIALPLAN;
$client = CallFire\Api\Client::Rest("<api-login>", "<api-password>", "Broadcast");
$request = new Request\CreateBroadcast;
$request->setName('My CallFire Broadcast');
$request->setType('IVR');
$request->setFrom('15551231234'); // A valid Caller ID number
$request->setDialplanXml($dialplan);
$response = $client->CreateBroadcast($request);
$result = $client::response($response);
if ($result instanceof Response\ResourceReference) {
    // Success
}
You can read this:
http://www.voip-info.org/wiki/view/Asterisk+auto-dial+out
Main tip: you WILL have A LOT of issues. If you are not an expert with at least 5 years of development experience with Asterisk, you will have to use an already-developed dialling core or hire a guru. There is no open-source core that can do more than 300 calls on a single server.
You can't do 1000 calls on a single Asterisk instance in an app developed by "just a nice developer". It will simply not work.
Creating a dialling core for 1000 calls is a "rocket science" type of task. It requires a very special dialling core, very special server tuning, and a very specialized dialler with pre-planning.
1000 calls will result in 23 Mbit/s to 80 Mbit/s of bandwidth usage with SMALL packets; that fact alone can get you banned by your hosting provider and requires the Linux network stack to be tuned.
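As a rough sanity check of that range, the per-call bitrates below are assumptions based on common VoIP codecs once RTP/UDP/IP overhead is included (roughly 23-31 kbit/s for G.729 and 80-87 kbit/s for G.711, depending on packetization):

\[
1000 \times 23\ \mathrm{kbit/s} \approx 23\ \mathrm{Mbit/s}
\qquad
1000 \times 80\ \mathrm{kbit/s} \approx 80\ \mathrm{Mbit/s}
\]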
You can use the ICTBroadcast REST API to integrate your application with a well-known autodialer; please visit the following link for more detail:
http://www.ictbroadcast.com/news/using-rest-api-integerate-ictbroadcast--third-party-application-autodialer
ICTBroadcast is based on the Asterisk communication engine.
I've already done this for phone validation and for phone message broadcasting using Asterisk and FreeSWITCH. I would go with FreeSWITCH and XML-RPC:
https://wiki.freeswitch.org/wiki/Freeswitch_XML-RPC

Sending a response without calling render() from a Mojolicious::Lite application

I am writing a "partial proxy" in Mojolicious::Lite. Certain requests (depending on the query path, and on the values of the parameters) generate a request to another server, while others are handled locally.
There is a nice proxy example, but it totally overrides the request/response handling and thus is not suitable for my needs.
Currently, I am marshalling the response via
$self->render(data => $res->body, code => $res->code);
Unfortunately, this does not take into account different content types. Using Mojolicious::Types does not help, because I need a reverse mapping from the content type in $res to the format in render(); besides, the number of possible render formats is significantly smaller than the number of possible content types.
So ideally, instead of the $self->render() call above, I need a way to say "here, I got a response in $res; please return it back to the client as is".
Any ideas? Thanks!
OK, the trick was to replace the render() call with:
$self->tx->res($res);
$self->rendered($res->code);