I'm writing a client library for a custom TCP-based protocol. I'd like the library to easily permit users to make requests on multiple threads, which is the reason for my question. My library is intended to replace a slow, Python-based implementation which I've identified as a bottleneck in our pipeline, so request throughput is crucial here. So far, I've used tokio's networking facilities so that, once I've got concurrent sockets, I'll be able to execute requests in parallel. After the application opens a socket, the protocol allows it to re-use the connection for subsequent requests.
When a session begins, the application sends an initial request and receives an authorization token in return, which it must include with future requests. Requests are frequent and small but irregular in their contents, so I'd like to pool my TCP sockets, like (I imagine) a web browser does to support the keep-alive mechanism. I very much want to avoid having each request spin up a new socket -- even though it could re-use the existing authorization token -- because of the delays associated with TCP's three-way handshake and slow start.
The classic solution to this problem is a connection pool, like you'd use in a web application to connect to a database. The trouble is, I haven't been able to find a good connection pool for sockets, and I'd prefer to avoid introducing the complexity of one I write myself. There's r2d2 (see here), for instance, but that only supports database connections. Meanwhile, tk_pool (here) hasn't been updated in two years, which is not encouraging.
This feels like a common task, so I'm surprised I haven't yet found a simple way to do this. I'm new to Rust's async/await features and tokio, so I may well be missing something essential. Here's the question, simply:
How can I distribute many bits of IO across several sockets, each connected to the same host? Put another way, how can I have a pool of workers take temporary ownership of (or gain a mutable reference to) the first available of a set of equivalent resources?
I'm open to all manner of suggestions, but to avoid making this question opinion based, I think the central question is one of fact: What is the idiomatic, async Rust way to do connection pooling?
Thanks!
Here's some pseudo-code that outlines how I'm imagining my code would look, once I've identified the right way to do this:
struct Session {
    pool: ConnectionPool<tokio::net::TcpStream>,
    // authorization token, etc.
}

impl Session {
    // `Params` is a stand-in for whatever a request actually needs.
    async fn make_request(&mut self, parameters: Params) -> Result<Response, Error> {
        // Probably requires &mut self. I won't be able to distribute the
        // Session object if it requires mutable references. Will I need
        // to use Cell?
        let mut sock = self.pool.borrow_socket();
        sock.write_all(format!("request: {}", parameters).as_bytes()).await?;
        let results = sock.read(/* pseudocode: one framed response */).await?;
        Ok(parse(results)?)
        // sock is dropped, which returns it to the pool; alternatively,
        // maybe you've got to call, e.g., sock.release().
    }
}
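For completeness, here is roughly the naive pool I could write myself -- the complexity I was hoping a crate would take off my hands: a mutex-guarded queue of idle sockets. This is a minimal sketch only: no health checks, no cap on open connections, and checkout/give_back are names I've invented.

use std::collections::VecDeque;
use tokio::net::TcpStream;
use tokio::sync::Mutex;

struct ConnectionPool {
    addr: String,
    idle: Mutex<VecDeque<TcpStream>>,
}

impl ConnectionPool {
    // Reuse an idle socket if one is available; otherwise dial a new one.
    async fn checkout(&self) -> std::io::Result<TcpStream> {
        if let Some(sock) = self.idle.lock().await.pop_front() {
            return Ok(sock);
        }
        TcpStream::connect(self.addr.as_str()).await
    }

    // Hand the socket back once a request/response cycle completes.
    async fn give_back(&self, sock: TcpStream) {
        self.idle.lock().await.push_back(sock);
    }
}

Note that checkout only needs &self, so a Session holding this pool could be shared behind an Arc without handing out mutable references; the part I don't like is that the caller must remember to call give_back, which is exactly what a drop-based guard (and hence a real pool crate) would handle.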
Related
I have a service which is exposed over HTTP. Most of the traffic gets into it via a single HTTP GET endpoint, in which the payload is serialized and encrypted (RSA). The client systems have common code which ensures that serialization and deserialization will succeed. One of the encoded parameters is the operation type; in my service there is a huge switch (almost 100 cases) that checks which operation is being performed and executes the proper code.
switch (operationType) { // the operation type decoded from the payload
    case OPERATION_1: {
        operation = new Operation1Class(basicRequestData, serviceInjected);
        break;
    }
    case OPERATION_2: {
        operation = new Operation2Class(basicRequestData, anotherServiceInjected);
        break;
    }
    // ...almost 100 more cases
}
The operations fall into a few types: some are typical resource operations (GET_something, UPDATE_something), others are method-based (VALIDATE_something, CHECK_something).
I am thinking about refactoring the API of the service so that it is more RESTful, especially in the resource-based part of the system. To do so, I would probably split the single endpoint into proper resource endpoints (e.g. /resource/{id}/subresource) or RPC-like endpoints (/validateSomething). I feel it would be better, but I cannot come up with a concrete argument for it.
The question is: what are the advantages of the refactored solution, and conversely, what are the disadvantages of the current solution?
The current solution separates client from server, it scales (adding a new endpoint only requires adding a new operation type in the common code), and it is quite clear; two clients use it in two different programming languages. I know the API sits at level 0 of the Richardson maturity model, but I cannot come up with a reason to move it to level 3 (or at least level 2 -- resources and methods).
Most of the traffic gets into it via a single HTTP GET endpoint, in which the payload is serialized and encrypted (RSA)
This is potentially a problem here, because the HTTP specification is quite clear that GET requests with a payload are out of bounds.
A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.
It's probably worth taking some time to review why this matters, because your existing implementation does seem to work -- so what's the problem?
The problem here is interop - can processes controlled by other people communicate successfully with the processes that you control? The HTTP standard gives us shared semantics for our "self descriptive messages"; when you violate that standard, you lose interop with things that you don't directly control.
And that in turn means that you can't freely leverage the wide array of solutions that we already have in support of HTTP, because you've introduced this inconsistency.
The appropriate HTTP method to use for what you are currently doing? POST
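Concretely, the same message moved into the body of a POST might look like this (the target URI and media type here are placeholders, not recommendations):

POST /operations HTTP/1.1
Host: example.org
Content-Type: application/octet-stream

...the serialized, RSA-encrypted payload...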
REST (aka Richardson Level 3) is the architectural style of the world wide web.
Your "everything is a message to a single resource" approach gives up many of the advantages that made the world wide web catastrophically successful.
The most obvious of these is caching. "Web scale" is possible in part because standardized caching support greatly reduces the number of round trips we need to make. However, the grain of caching in HTTP is the resource -- everything keys off the target-uri of a request. Thus, by funneling all information through a single target-uri, you lose fine-grained caching control.
You also lose safe request semantics - with every message buried in a single method type, general purpose components can't distinguish between "effectively read only" messages and messages that request that the origin server modify its own resources. This in turn means that you lose pre-fetching, and automatic retry of safe requests when the network is unstable.
In all, you've taken a rather intelligent application protocol and crippled it, leaving yourself with a transport protocol.
That's not necessarily the wrong choice for your circumstances - SOAP is a thing, after all, and again, your service does seem to be working as is, which implies that you don't currently need the capabilities that you've given up.
It would make me a little bit suspicious, in the sense that if you don't need these things, why are you using HTTP rather than some messaging protocol?
Let's assume I have a REST API for a bulletin board with threads and their comments as a subresource, e.g.
/threads
/threads/{threadId}/comments
/threads/{threadId}/comments/{commentId}
The user can retrieve all threads with /threads, but what is an efficient/good way to retrieve all comments?
I know that HAL can embed subresources directly into a parent resource, but that potentially means sending a lot of data over the network, even if the client does not need the subresource. Also, I guess paging is difficult to implement (say one thread contains many hundreds of posts).
Should there be a different endpoint representing the SQL query where threadId in (..., ..., ...)? I'm having a hard time naming this endpoint in a strictly resource-oriented fashion.
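For example, something like the following, where the query parameter name is just an illustration, not a convention I've found:

GET /comments?threadId=1,2,3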
Or should I just let the client retrieve each subresource individually? I guess this boils down to the N+1 problem. But maybe it's not that big of a deal, as the client could start retrieving all subresources at once, and the responses should come back more or less simultaneously? I can think of the drawback that this more or less forces the API client to use non-blocking IO (as otherwise the client may need to open 20 threads for a page size of 20 - or even more), which might not be so straightforward in some frameworks. Also, with HTTP 1.1, clients are typically limited to about 6 simultaneous connections per host, right?
I'm actually now leaning towards the last option, with a focus on HTTP 2 and non-blocking IO (or even server push?) - although some simpler clients may not support this. At least the API would stay clean and would not have to be changed just to work around technical difficulties.
Is there any other option I have missed?
Trying to do some simple things with MongoDB, I got stuck on something that feels kind of strange to me.
from pymongo import MongoClient

client = MongoClient(connection_string)
db = client.database
print(db)
client.close()
I thought that when I make a connection, that single connection is used throughout the rest of the code until the close() method. But it doesn't seem to work that way... I don't know how I ended up with 9 connections when it was supposed to be a single one, and even if each 'request' were a connection, that's too many of them.
For now it's not a big problem; it just bothers me that I don't know exactly how this works!
When you call MongoClient(), you are not establishing just one connection. You are in fact creating the client, which maintains a connection pool. When you make one or more requests, the driver uses an available connection from the pool; when the operation completes, the connection goes back to the pool.
Calling the MongoClient constructor every time you need to talk to the db is a very bad practice and incurs a penalty for each new handshake. Use dependency injection or a singleton to hold the MongoClient.
According to the documentation, you should create one client per process.
Your code is essentially correct if it is a single-threaded process. If you don't need any more connections to the server, you can limit the pool size by specifying it explicitly:
client = MongoClient(host, port, maxPoolSize=<num>)
On the other hand, if the code might later need a connection again, it is better to simply create the client once at the beginning and use it across the code.
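In other words, something along these lines -- a minimal sketch, where the URI, database, and collection names are placeholders:

from pymongo import MongoClient

# One client per process; it maintains the connection pool internally.
client = MongoClient("mongodb://localhost:27017", maxPoolSize=10)

def handle_request(doc):
    # Each operation checks a connection out of the pool and returns it
    # when the operation completes; there is nothing to close per request.
    client.mydb.mycollection.insert_one(doc)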
I am creating a custom system in which, when a user submits a NETCONF edit-config, the system initiates a set of actions that atomically alter its configuration and then sends the user a notification of success or failure.
Think of it as a big SQL transaction that, at the end, either commits or rolls back.
So, the steps are:
User submits an edit-config
System accepts config and works to implement this config
If the config is successful, it sends back a thumbs-up response (not sure of the formal way of doing this)
If the config is a failure, it sends back a thumbs-down response (and I will have to make sure the config is rolled back internally)
All this is done atomically. So, if a user submits two configs in a row, they won't conflict with each other.
Our working idea (probably not the best one) was to accept the edit-config and then, within sysrepo, edit some of our leafs with the success or failure flags, all within the same session as the initial change. We were hoping this would keep everything atomic; with edits outside of the session, multiple configuration changes could conflict with each other.
We weren't sure whether to do this with pure NETCONF or to leverage sysrepo directly. We noticed all the plugins/bindings made for sysrepo and figured those could be used to talk to our datastore directly.
That said, our working idea is most likely not a best-practice approach. What would be the best way to achieve this?
Our system is:
netopeer 1.1.27
sysrepo 1.4.58
libyang 1.0.167
libnetconf2 1.1.24
And our YANG file is:
module rxmbn {
  namespace "urn:com:zug:rxmbn";
  prefix rxmbn;

  container rxmbn-config {
    config true;

    leaf raw {
      type string;
    }
    leaf raw_hashCode {
      type int32;
    }
    leaf odl_last_processed_hashCode {
      type int32;
    }
    leaf processed {
      type boolean;
      default "false";
    }
  }
}
Currently we can:
Execute an edit-config against the netopeer server
See the new config registered in the sysrepo datastore
We can capture the moment sysrepo registers the data via sysrepo's API
But we are having problems with:
Atomically editing the datastore during the update session (due to locks, which is normal. In fact, if there is no way to edit during an update session, that is fine and not necessary. The main goal is the next bullet)
Atomically reacting to the new edit-config and responding to the end user
We are all a bit new to NETCONF and YANG, so I am sure there is some way to leverage the notification API or event API, either through the netopeer session or sysrepo; we just don't know enough yet.
If there are any examples or implementation advice to create an atomic transaction for this, that'd be really useful.
I know nothing of sysrepo, so this is from a NETCONF perspective.
NETCONF servers process requests serially within a single session in a request-response fashion, meaning that everything you do within a single NETCONF session should already be "atomic" - you cannot send two requests and have them applied in reverse order or in parallel no matter what you do. A well-behaved client would also wait for each response from the server before sending a new request, especially if all updates must execute successfully and in a specific order. The protocol also defines no way to cancel a request already sent to a server.
If you need to prevent other sessions from modifying a datastore while one session performs a sequence of <edit-config> operations, you use the <lock> and <unlock> NETCONF operations to lock the entire datastore. There is also RFC5717 and partial lock, which would only lock a specific branch of the datastore.
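For example, locking the running datastore is an ordinary <rpc> (this is essentially the example from Section 7.5 of RFC6241):

<rpc message-id="101"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <lock>
    <target>
      <running/>
    </target>
  </lock>
</rpc>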
Using notifications to report success of an <edit-config> would be highly unusual - that's what <rpc-reply> and <rpc-error> are there for within the same session. You would use notifications to inform other sessions about what's happening. In fact, there are standard base notifications for config changes.
I suggest reading the entire RFC6241 before proceeding further. There are things like candidate datastores, confirmed-commits, etc. you should know about.
Which component are you developing - the NETCONF client/manager or the NETCONF server?
In general, the NETCONF server should implement individual NETCONF RPC operations in an atomic way.
When a NETCONF client wants to perform a set of operations in an atomic way, it should follow the procedure explained in Appendix E.1 of RFC 6241.
I have a question regarding MSMQ...
I designed an async architecture like this:
Client -> WCF Service (hosted in a Windows Service) -> MSMQ
so basically the WCF service takes the requests, processes them, adds them to an INPUT queue and returns a GUID. The same WCF service (through a listener) takes the first message from the queue (does some stuff...) and then puts it into another queue (OUTPUT).
The problem is how to retrieve the result from the OUTPUT queue when a client requests it, because MSMQ does not allow random access to its messages; the only solution would be to iterate through all the messages and push them back in until I find the exact one I need. I do not want to use a DB for this OUTPUT queue, because of some limitations imposed by the client...
You can look in your output queue for your message by using:
var mq = new MessageQueue(outputQueueName);
mq.PeekById(yourId);
Receiving by Id:
mq.ReceiveById(yourId);
A queue is inherently a "first-in-first-out" kind of data structure, while what you want is a "random access" data structure. It's just not designed for what you're trying to achieve here, so there isn't any "clean" way of doing this. Even if there was a way, it would be a hack.
If you elaborate on the limitations imposed by the client perhaps there might be other alternatives. Why don't you want to use a DB? Can you use a local SQLite DB, perhaps, or even an in-memory one?
Edit: If you have a client dictating implementation details to their own detriment then there are really only three ways you can go:
Work around them. In this case, that could involve using a SQLite DB - it's just a file and the client probably wouldn't even think of it as a "database".
Probe deeper and find out just what the underlying issue is, ie. why don't they want to use a DB? What are their real concerns and underlying assumptions?
Accept a poor solution and explain to the client that this is due to their own restriction. This is never nice and never easy, so it's really a last resort.
You could use CorrelationId and set it when you send the message. Then, to receive the same message, you can pick out the specific message with ReceiveByCorrelationId, as follows:
message = queue.ReceiveByCorrelationId(correlationId);
Moreover, CorrelationId is a string with the following format:
Guid()\\Number
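Putting it together: the usual request/reply pattern is for the worker to copy the input message's Id into the output message's CorrelationId, so the client can later fetch its result using the GUID it was handed. A rough sketch -- the class, queue arguments, and Process method are placeholders:

using System;
using System.Messaging;

class ResultStore
{
    // Worker side: tag each result with the Id of the request it answers.
    static void PublishResult(MessageQueue input, MessageQueue output)
    {
        Message request = input.Receive();
        Message result = new Message(Process(request.Body))
        {
            CorrelationId = request.Id  // Id is already in Guid\Number form
        };
        output.Send(result);
    }

    // Client-facing side: fetch the result for a given request Id,
    // waiting up to five seconds for it to arrive.
    static Message FetchResult(MessageQueue output, string requestId)
    {
        return output.ReceiveByCorrelationId(requestId, TimeSpan.FromSeconds(5));
    }

    static object Process(object body) => body; // placeholder for real work
}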