Can Dart have sync APIs? - postgresql

I am thinking about using Dart for server-side programming. Because there are no full frameworks similar to Ruby on Rails, I am reviewing lower-level libraries. The most needed library is a PostgreSQL driver. I found a few, and the most mature seems to be https://pub.dartlang.org/packages/postgresql
And here is my problem. The postgresql driver has an async API. I know that async APIs are required on the client side, simply to not block the UI thread. However, on the server side threads are available, so why does the postgresql driver have an async API?
I know about Promise-style APIs, but for me this is just unneeded complexity when doing things on the server side. Code readability is most important.
I just wonder if there is something in Dart's language design that forces people to build async APIs. So the question is: can Dart have sync APIs for database and file I/O operations?

Dart libraries/packages can offer sync APIs. DB ones tend not to, because you usually don't want to block the entire server waiting for a potentially long DB operation to finish. For example, say you're creating a web server with a request handler that fetches data from the DB and serves it. If you're only using sync operations and you get 10 requests, the other 9 will be waiting for the first one to finish before being processed.
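A minimal sketch of the difference, using a delayed Future to stand in for a slow database query (no real driver involved): ten concurrent "requests" overlap instead of queueing behind each other.

```dart
import 'dart:async';

// Simulate a slow database query: it completes after a delay
// instead of blocking the whole server while it runs.
Future<String> fetchUser(int id) =>
    Future.delayed(Duration(milliseconds: 100), () => 'user-$id');

Future<void> main() async {
  final sw = Stopwatch()..start();
  // Ten "requests" issued concurrently: their delays overlap
  // rather than running one after another.
  final users = await Future.wait(List.generate(10, (i) => fetchUser(i)));
  print(users.length); // 10
  print(sw.elapsedMilliseconds < 1000); // true: far less than 10 x 100 ms
}
```

With blocking calls the same ten requests would take ten times as long, since each would hold up the next.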
If your only concern is readability, you can wait for the await keyword to be implemented, which will help your code read like sync code while actually working async:
var conn = await connect(uri);
var results = await conn.query('select * from users').toList();
for (var result in results) {
  print("${result.username} has email address ${result.emailAddress}.");
}

Dart does have synchronous file I/O operations. For example, see File.readAsStringSync().
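A small self-contained illustration of the synchronous file API (the filename here is arbitrary); both calls block the current isolate until the I/O completes:

```dart
import 'dart:io';

void main() {
  // Write then read a file with the synchronous API.
  final file = File('example.txt')..writeAsStringSync('hello');
  print(file.readAsStringSync()); // hello
  file.deleteSync(); // clean up
}
```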
Dart does not have a built-in way to perform blocking I/O on a socket, which means that the postgresql library must be asynchronous.
Dart, even on the server, doesn't really have "threads". It has isolates which can run in parallel, but can only communicate by message passing, and do not have synchronisation primitives like mutexes.
Here is a list of server frameworks.
When writing asynchronous code in Dart, the Future.then() syntax, especially with multiple levels of nesting, can become tedious. Luckily the async/await feature is being implemented, which means you can write code that is asynchronous but reads like code written with blocking I/O. For example:
main() async {
  var conn = await connect('postgresql://foo');
  try {
    var sql = "select 'foo'";
    var result = await conn.query(sql).toList();
    return result[0];
  } on Exception catch (ex) {
    print('Query error: $ex');
  } finally {
    conn.close();
  }
}
As of November 2014, to use async/await you need to run dart with the --enable_async flag, and be prepared for crashes and missing stack trace information - this feature is under active development and not yet stable. Another, more stable option is to use the async_await package to translate your async/await code to use Futures.
If you have any specific questions about the postgresql driver (I am the author), feel free to open an issue against it in github, or to send me an email. My email address is on the postgresql pub page.

You need concurrent programming on the server too, not only on the client. It would be highly inefficient for the server to process requests from clients one after another - waiting until one request is totally completed before starting the next while the Dart server process itself is waiting for the operating system, the database or the network to complete calls made to it.
When an async I/O operation is invoked, Dart can start processing other requests while it waits for that operation to complete.
You can improve concurrency by using isolates, but you can't create a Dart application that does I/O using sync calls only.

Dart doesn't support multithreading, except in the form of Dart isolates (and those are not production ready). Only asynchronous processing is well supported, and the "await" keyword (arguably the nicest syntax for coroutines) is being added to Dart in the next few months. If you need to build a small web site it will work great.
But if you need a really scalable solution for a big web site or a demanding web app, I suggest you use the combination of Dart+Go. Use Dart to manage the client/UI side, and Go to manage the data providers on the server side. Go is extremely performant on the server side thanks to its innovative "goroutines". Goroutines are a form of coroutine that runs automatically on multiple threads, combining the flexibility of asynchronous processing with the efficiency of synchronous multithreading. Goroutines are automatically multiplexed onto multiple OS threads, so if one should block, such as while waiting for I/O, the others continue to run.
Go+Dart makes a really powerful combination.

Related

Is it possible to open two websockets connection using SignalR package in Dart?

I have been working with the SignalR package for a while, and now I have a task which requires me to create two WebSocket connections. One channel is for fetching about 50 photos (each comes from the server at about 2 MB, which is pretty big); the second is for getting the time, the user's auth, and keeping the user's data. So my question is:
How can I open two WebSocket connections concurrently?
I have read that the WebSocket works synchronously and it is impossible to make it asynchronous (I guess?...), so maybe someone has had the same issue and solved it?
Now with the SignalR package I need to start my WebSocket connection every time I want to make a request to the server. For example, it currently looks like this:
The first request on the page:
Future<String> getTime() async {
  // some code for building the url, taken from the SignalR official docs
  await connection.start();
  return await hubconnection.invoke('getTime');
}
The second request on the page:
Future<String> getUsersData() async {
  // some code for building the url, taken from the SignalR official docs
  await connection.start();
  return await hubconnection.invoke('getUsersData');
}
So I am not sure about the line await connection.start(), because I think it means that every time I go to a page where there can be up to five such requests, the WebSocket is started every time, which greatly affects the performance of the app. Is it possible to share this line between all requests, or is there some way to improve this?
So after this research I was thinking: maybe I can open two WebSocket connections to reduce the workload?
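One common pattern for sharing a single start() across requests (a sketch only - FakeHubConnection and ensureStarted are hypothetical names standing in for the real SignalR types) is to memoize the start() Future, so every caller awaits the same handshake instead of opening a new one:

```dart
import 'dart:async';

// Hypothetical stand-in for the real hub connection type.
class FakeHubConnection {
  int startCalls = 0;
  Future<void> start() async {
    startCalls++;
  }
}

final connection = FakeHubConnection();
Future<void>? _starting;

// Memoize the start() Future: the first caller triggers the
// handshake, later callers await the same pending Future.
Future<void> ensureStarted() => _starting ??= connection.start();

Future<void> main() async {
  await Future.wait([ensureStarted(), ensureStarted(), ensureStarted()]);
  print(connection.startCalls); // 1: started once, shared by all callers
}
```

Each request handler would then call ensureStarted() instead of connection.start(), so the connection is established only once per page.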

Flutter: why ever use a Future over a Stream?

If a Future delivers a one-off piece of data while a Stream offers the additional advantage of updating information in real time when data is modified at the source (e.g., a Firestore database), then why would one ever use a Future? What disadvantages would using a Stream over a Future have?
Why would one ever use a Future?
A Future handles a single event: an asynchronous operation completing or failing. It can be used for simple HTTP requests (GET, POST, ...).
You can have a look at the Boring Flutter Development Show, where Google engineers build a simple Hacker News app with Futures.
EDIT
New video from the Flutter team about Dart Futures
What disadvantages would using a Stream over a Future have?
They are made for different needs: real-time updates versus one-off asynchronous calls, so they cannot really be compared in terms of advantages of one over the other.
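The single-value versus many-values distinction can be shown in a few lines (a minimal sketch using only dart:async, no Flutter or Firestore involved):

```dart
import 'dart:async';

// A Future resolves exactly once...
Future<int> fetchOnce() async => 42;

// ...while a Stream can emit many values over time.
Stream<int> countTo(int n) async* {
  for (var i = 1; i <= n; i++) {
    yield i;
  }
}

Future<void> main() async {
  print(await fetchOnce()); // 42: one value, then done
  final seen = <int>[];
  await for (final value in countTo(3)) {
    seen.add(value); // the stream delivers values as they arrive
  }
  print(seen); // [1, 2, 3]
}
```

For a one-shot operation like an HTTP GET, a Future is the natural fit; a Stream adds listener/subscription management you don't need.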

What should be returned from the API for CQRS commands?

As far as I understand, in a CQRS-oriented API exposed through a RESTful HTTP API, the commands and queries are expressed through the HTTP verbs, the commands being asynchronous and usually returning 202 Accepted, while the queries get the information you need. Someone asked me the following: supposing they want to change some information, they would have to send a command and then a query to get the resulting state. Why force the client to make two HTTP requests when you can simply return what they want in the HTTP response of the command, in a single request?
We had a long conversation on the DDD/CQRS mailing list a couple of months ago (link). One part of the discussion was the "one-way command", and this is what I think you are assuming. You will find that Greg Young is opposed to this pattern. A command changes the state and is therefore prone to failure, meaning it can fail and you should support this. A REST API with POST/PUT requests provides perfect support for this, but you should not just return 202 Accepted - you should give some meaningful result back. Some people return 200 Success along with an object that contains a URL to retrieve the newly created or updated object. If the command handler fails, it should return 500 and an error message.
Having fire-and-forget commands is dangerous since it can give a consumer wrong ideas about the system state.
My team also recently had a very heated discussion about this very thing. Thanks for posting the question. I have usually been the defender of the "fire and forget" style of commands. My position has always been that, if you want to be able to move to an async command dispatcher some day, then you cannot allow commands to return anything. Doing so would kill your chances, since an async command doesn't have much of a way to return a value to the original HTTP call. Some of my teammates really challenged this thinking, so I had to start considering whether my position was really worth defending.
Then I realized that async or not async is JUST an implementation detail. This led me to realize that, using our frameworks, we can build in middleware to accomplish the same thing our async dispatchers are doing. So we can build our command handlers the way we want to, returning whatever makes sense, and then let the framework around the handlers deal with the "when".
Example: my team is currently building an HTTP API in Node.js. Instead of requiring a POST command to only return a blank 202, we are returning details of the newly created resource. This helps the front end move on. The front end POSTs a widget and opens a channel to the server's web socket, using the same command as the channel name. The request comes to the server and is intercepted by middleware, which passes it to the service bus. When the command is eventually processed synchronously by the handler, it "returns" via the web socket and the front end is happy. The middleware can be disabled easily, making the API synchronous again.
There is nothing stopping you from doing that. If you execute your commands synchronously and create your projections synchronously, then it is easy to run a query directly after executing the command and return that result. If you do this asynchronously via the REST API, then you have no query result to send back. If you do it asynchronously within your system, then you can wait for the projection to be created and then send the response to the client.
The important thing is that you separate your write and read models in classic CQRS style. That does not mean that you cannot do a read in the same request as the command. Sure, you can send a command to the server and then wait, with SignalR (or something similar), for a notification that your projection has been created/updated. I do not see a problem with waiting for the projection to be created on the server side instead of on the client.
How you do this will affect your infrastructure and error handling. Also, you will hold the HTTP request open for longer if you return the result at once.

Non-blocking / Asynchronous Execution in Perl

Is there a way to implement non-blocking / asynchronous execution (without fork()'ing) in Perl?
I was a Python developer for many years... Python has the really great 'Twisted' framework that allows you to do so (using Deferreds). When I searched for anything in Perl that does the same, I came across the POE framework, which seemed "close enough" to what I was looking for. But... after spending some time reading the documentation and playing with the code, I ran into a wall - the following limitation (from the POE::Session documentation):
Callbacks are not preemptive. As long as one is running, no others will be dispatched. This is known as cooperative multitasking. Each session must cooperate by returning to the central dispatching kernel.
This limitation essentially defeats the purpose of asynchronous/parallel/non-blocking execution by restricting execution to only one callback (block of code) at any given moment. No other callback can start running while another is already running!
So... is there any way in Perl to implement multi-tasking (parallel, non-blocking, asynchronous execution of code) without fork()'ing - similar to DEFERREDs in Python?
Coro is a mix between POE and threads. From reading its CPAN documentation, I think that IO::Async does real asynchronous execution. threads can be used too - at least Padre IDE successfully uses them.
I'm not very familiar with Twisted or POE, but basic parallel execution is pretty simple with threads. Interpreters are often not compiled with threading support, so you would need to check for that. The forks package is a drop-in replacement for threads (it implements the full API) but uses processes seamlessly. Then you can do stuff like this:
use threads;

my $thread = async {
    print "you can pass a block of code as an arg unlike Python :p\n";
    return some_func();    # some_func() is whatever work you want to run
};
my $result = $thread->join();
I've definitely implemented callbacks from an event loop in an async process using forks and I don't see why it wouldn't work with threads.
Twisted also uses cooperative multi-tasking just like POE & Coro.
However it looks like Twisted Deferred does (or can) make use of threads. NB. See this answer from the SO question Twisted: Making code non-blocking
So you would need to go the same route with POE (though using fork is probably preferable).
So one POE solution would be to use: POE::Wheel::Run - portably run blocking code and programs in subprocesses.
For alternatives to POE take a look at AnyEvent and Reflex.
I believe you use select for that kind of thing. More similarly to forking, there's threading.
POE is fine if asynchronous processing on only a single CPU (core) is enough for you.
For example, if the app is I/O-bound, a single process will be enough most of the time.
No other callback can start running while another is already running!
As far as I can tell, this is the same in all languages (per CPU thread, of course; modern web servers usually spawn at least one process or thread per CPU core, so it will look to users like stuff is working in parallel - but the long-running callback didn't get interrupted, some other core just did that work).
You can't interrupt an interrupt, unless the interrupted interrupt has been programmed specifically to accommodate it.
Imagine code that takes 1 minute to run, and a PC with 16 cores - now imagine a million people trying to load that page. You can deliver working results to 16 people and time out all the rest, or you can crash your web server and give no results to anyone. Folks choose not to crash their web server, which is why they never permit callbacks to interrupt other callbacks (not that they could even if they tried - the caller never gets control back to make a new call before the prior one has ended anyhow...).

Can I use async controllers in the following scenario?

I have an application in Asp.net MVC where at some point I would like to display a modal dialog to the user that would display process execution progress indicator.
The process behind the scenes does a lot of database data processing (based on existing data it generates lots of resulting records that get written back to database as well). Process may take anything from a brief moment to a very long time (depending on existing data).
Application will initiate this process asynchronously (via Ajax request) and display progress in the same manner.
The problem
I've read a bit about async controllers, where one can asynchronously start a process and be informed about the end of it, but there's no progress indication and I'm not really sure how browser timeouts are handled. As far as the client goes, an async request is the same as a synchronous one: the client will wait for the response (as I understand it). The main difference is that the server will execute something in an async manner so it won't block other incoming requests. What I should actually do is:
make a request that would start the process and respond to the client that the process has started.
the client would then periodically poll the server for the process progress status, getting an immediate response with a percentage value (probably as JSON)
when progress reaches 100%, the process has ended, so the client knows to make a request for the results.
I'm not convinced that async controllers work this way...
The thing is that I'm not really sure I understand async controllers, hence I'm not sure which approach I should use for the problem just described. I see two possibilities:
Asp.net MVC async controllers, if they can work this way
Windows Service app that processes data on request and reports its progress - this service would be started by writing a particular record to the DB using a normal controller action; the service would then write its progress status to the DB so my Asp.net MVC app would be able to read it on the client's polling requests.
I haven't used async controllers myself in a project. However, here's a link to someone who has.
asynchronous-processing-in-asp-net-mvc-with-ajax-progress-bar
I have personally used Number 2 in a large production project.
Number 2 was a service app running on a separate server, using OpenSSH to communicate between the two servers. We'd poll for progress periodically to update the progress bar in the client's UI via AJAX.
Additionally, by separating your web server from your long-running process you are separating your concerns. Your web server is not interested in writing files to disk, handling I/O, etc., and so shouldn't be burdened with such.
If your long-running process has to be killed or fails, this won't affect your web server's ability to handle requests and process transactions.
Another suggestion, for an extremely long-running process, is not to burden the client with waiting: give them the option to come back later to see the progress, e.g. send them an e-mail when it's done.
Or actually show them something interesting - in our case we had a signed Java applet show exactly what their process was doing at that exact moment.