I am working with an API that limits me to 40 requests per second, and 200 every 120 seconds.
I am currently designing an async pattern in Rust using reqwest and tokio. I want to incorporate the rate limiting constraints. I have done similar things in Python using semaphores and have looked into semaphores in Rust, but am not quite sure how to structure my code.
Ideally, I'd like to:
Send in batches of 40 requests (never more than 40 per second)
Once I hit 200 requests before the 120-second timer has elapsed, stop and wait out the rest of the window. Hitting a 429 incurs a 120-second wait, so the goal is to fill the bucket up to that limit, then wait until I can begin sending requests again.
After all requests are finished, collect the responses in a Vec
I'm curious about other thoughts and ideas on how best to handle this. I've read several other questions about this type of situation but haven't found something that works yet. I'm also completely new to async/await in Rust, so any refactoring advice helps.
The current async pattern looks like this:
use tokio::time::{ sleep, Duration };
use reqwest::header::HeaderMap;
async fn _make_requests(
    headers: &HeaderMap,
    requests: &[String],
) -> Result<Vec<String>, Box<dyn std::error::Error>> {
    let client = reqwest::Client::new();
    let mut handles = Vec::with_capacity(requests.len());
    // Each req is a string URL which will pull back the response text from the API
    for req in requests {
        let client = client.clone();
        // Spawned tasks must own their data ('static), so clone per task
        let headers = headers.clone();
        let req = req.clone();
        handles.push(tokio::spawn(async move {
            client.get(&req)
                .headers(headers)
                .send()
                .await?
                .text()
                .await
        }));
    }
    // Handle each response's status/error as its task completes
    let mut responses = Vec::with_capacity(handles.len());
    for handle in handles {
        match handle.await? {
            Ok(resp) => responses.push(resp),
            Err(e) => eprintln!("{}", e),
        }
    }
    Ok(responses)
}
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create sample headers
    let mut headers = HeaderMap::new();
    headers.insert("Accept", "application/json".parse().unwrap());
    // Placeholder request URLs; the real list comes from elsewhere
    let requests = vec!["https://example.com".to_string()];
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()?;
    // Run the event loop until all requests complete
    let responses = rt.block_on(_make_requests(&headers, &requests))?;
    println!("Collected {} responses", responses.len());
    Ok(())
}
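For the rate limiting itself, here is a minimal sketch of the windowed approach described above: fire a batch of 40, sleep a second, and park for the remainder of the 120-second window once 200 requests have gone out. It assumes the futures crate for join_all; the batch size and waits come straight from the limits above, and a production version would want to handle errors and 429 responses more gracefully than bailing on the first failure.

use std::time::Instant;
use futures::future::join_all;
use reqwest::header::HeaderMap;
use tokio::time::{sleep, Duration};

async fn make_requests_limited(
    headers: &HeaderMap,
    requests: &[String],
) -> Result<Vec<String>, Box<dyn std::error::Error>> {
    let client = reqwest::Client::new();
    let mut responses = Vec::with_capacity(requests.len());
    let mut window_start = Instant::now();
    let mut sent_in_window = 0usize;
    for batch in requests.chunks(40) {
        // If this batch would exceed 200 in the current 120 s window, wait the window out.
        if sent_in_window + batch.len() > 200 {
            let elapsed = window_start.elapsed();
            if elapsed < Duration::from_secs(120) {
                sleep(Duration::from_secs(120) - elapsed).await;
            }
            window_start = Instant::now();
            sent_in_window = 0;
        }
        // Send the whole batch concurrently and wait for all of it to finish.
        let futures = batch.iter().map(|url| {
            let client = client.clone();
            let headers = headers.clone();
            let url = url.clone();
            async move { client.get(&url).headers(headers).send().await?.text().await }
        });
        for result in join_all(futures).await {
            responses.push(result?);
        }
        sent_in_window += batch.len();
        // Never more than one batch of 40 per second.
        sleep(Duration::from_secs(1)).await;
    }
    Ok(responses)
}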
Related
What's the best way to add a timeout to an awaiting function?
Example:
/// let's pretend this is in a library that I'm using and I can't mess with the guts of this thing
func fetchSomething() async -> Thing? {
// fetches something
}
// if fetchSomething() never returns, then doSomethingElse() is never run. Is there any way to add a timeout to this system?
let thing = await fetchSomething()
doSomethingElse()
I wanted to make the system more robust in the case that fetchSomething() never returns. If this were using Combine, I'd use the timeout operator.
One can create a Task, and then cancel it if it has not finished in a certain period of time. E.g., launch two tasks in parallel:
// cancel the fetch after 2 seconds
func fetchSomethingWithTimeout() async throws -> Thing {
    let fetchTask = Task {
        try await fetchSomething()
    }
    let timeoutTask = Task {
        try await Task.sleep(nanoseconds: 2 * NSEC_PER_SEC)
        fetchTask.cancel()
    }
    let result = try await fetchTask.value
    timeoutTask.cancel()
    return result
}
// here is a random mockup that will take between 1 and 3 seconds to finish
func fetchSomething() async throws -> Thing {
    let duration: TimeInterval = .random(in: 1...3)
    try await Task.sleep(nanoseconds: UInt64(TimeInterval(NSEC_PER_SEC) * duration))
    return Thing()
}
If the fetchTask finishes first, it will reach the timeoutTask.cancel and stop it. If timeoutTask finishes first, it will cancel the fetchTask.
Obviously, this rests upon the implementation of fetchSomething. It should not only detect the cancelation, but also throw an error (likely a CancellationError) if it was canceled. We cannot comment further without details regarding the implementation of fetchSomething.
For example, in the above example, rather than returning an optional Thing?, I would instead return Thing, but have it throw an error if it was canceled.
I hesitate to mention it, but while the above assumes that fetchSomething was well-behaved (i.e., cancelable), there are permutations on the pattern that work even if it does not (i.e., run doSomethingElse in some reasonable timetable even if fetchSomething “never returns”).
But this is an inherently unstable situation, as the resources used by fetchSomething cannot be recovered until it finishes. Swift does not offer preemptive cancelation, so while we can easily solve the tactical issue of making sure that doSomethingElse eventually runs, if fetchSomething might never finish in some reasonable timetable, you have a deeper problem.
You really should find a rendition of fetchSomething that is cancelable, if it is not already.
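(For anyone mapping this back to the Rust code in the first question: the analogous tool there is tokio::time::timeout, which drops, and thereby cancels, the wrapped future when the deadline passes. A minimal sketch, with fetch_something as a hypothetical stand-in for the library call:)

use tokio::time::{timeout, Duration};

// Hypothetical stand-in for a call we cannot modify and that may never return.
async fn fetch_something() -> Option<String> {
    None
}

fn do_something_else() {}

async fn fetch_with_timeout() {
    // timeout() drops (and thereby cancels) the future after 2 seconds,
    // so do_something_else() always runs within a bounded time.
    match timeout(Duration::from_secs(2), fetch_something()).await {
        Ok(thing) => println!("fetched: {:?}", thing),
        Err(_elapsed) => eprintln!("fetch timed out"),
    }
    do_something_else();
}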
You can use try/catch. Wait for the fetched data inside the try block; when it fails, the catch block can run a different statement. Something like this:
await getResource()
try {
    await fetchData();
} catch (err) {
    doSomethingElse();
}
// program continues
I am trying to communicate between 2 microservices written in Rust and Node.js using Kafka.
I'm using actix-web as web framework and rdkafka as Kafka client for Rust. On the Node.js side, it queries stuff from the database and returns it as JSON to the Rust server via Kafka.
The flow:
Request -> Actix Web -> Kafka -> Node -> Kafka -> Actix Web -> Response
The logic is: a request hits an endpoint on Actix Web, which produces a message asking the other microservice for data, waits for the reply (matched by Kafka message key), and returns it to the user as the HTTP response.
I got it to work, but the performance is very slow (I am stress-testing with wrk).
I'm not sure why it's slow, but while digging in I found that if I add a 5-second delay on the Node.js side and send 2 requests to actix-web one second apart, they respond after 5 and 10 seconds respectively.
The benchmark is around 3k requests per second, using the following command:
wrk http://localhost:8080 -d 20s -t 2 -c 200
This makes me guess that something might be blocking the thread for each request.
Here is the source code:
use std::{
    sync::Arc,
    time::{Duration, Instant}
};
use actix_web::{
    App,
    HttpServer,
    get,
    rt,
    web::Data
};
use futures::TryStreamExt;
use tokio::time::sleep;
use num_cpus;
use rand::{distributions::Alphanumeric, Rng};
use rdkafka::{
    ClientConfig,
    Message,
    consumer::{Consumer, StreamConsumer},
    producer::{FutureProducer, FutureRecord}
};

const TOPIC: &'static str = "exp-queue_general-5";

#[derive(Clone)]
pub struct AppState {
    pub producer: Arc<FutureProducer>,
    pub receiver: flume::Receiver<String>
}

fn generate_key() -> String {
    rand::thread_rng()
        .sample_iter(&Alphanumeric)
        .take(8)
        .map(char::from)
        .collect()
}

#[get("/")]
async fn landing(state: Data<AppState>) -> String {
    let key = generate_key();
    let t1 = Instant::now();
    let producer = &state.producer;
    let receiver = &state.receiver;
    producer
        .send(
            FutureRecord::to(&format!("{}-forth", TOPIC))
                .key(&key)
                .payload("Hello From Rust"),
            Duration::from_secs(8)
        )
        .await
        .expect("Unable to send message");
    println!("Producer take {} ms", t1.elapsed().as_millis());
    let t2 = Instant::now();
    let value = receiver
        .recv()
        .unwrap_or("".to_owned());
    println!("Receiver take {} ms", t2.elapsed().as_millis());
    println!("Process take {} ms\n", t1.elapsed().as_millis());
    value
}

#[get("/status")]
async fn heartbeat() -> &'static str {
    // ? Concurrency delay check
    sleep(Duration::from_secs(1)).await;
    "Working"
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // ? Assume that the whole node is just this Rust instance
    let mut cpus = num_cpus::get() / 2 - 1;
    if cpus < 1 {
        cpus = 1;
    }
    println!("Cpus {}", cpus);

    let producer: FutureProducer = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        .set("linger.ms", "25")
        .set("queue.buffering.max.messages", "1000000")
        .set("queue.buffering.max.ms", "25")
        .set("compression.type", "lz4")
        .set("retries", "0")
        .set("message.timeout.ms", "8000")
        .create()
        .expect("Kafka config");

    let (tx, rx) = flume::unbounded::<String>();

    rt::spawn(async move {
        let consumer: StreamConsumer = ClientConfig::new()
            .set("bootstrap.servers", "localhost:9092")
            .set("group.id", &format!("{}-back", TOPIC))
            .set("queued.min.messages", "200000")
            .set("fetch.error.backoff.ms", "250")
            .set("socket.blocking.max.ms", "500")
            .create()
            .expect("Kafka config");
        consumer
            .subscribe(&vec![format!("{}-back", TOPIC).as_ref()])
            .expect("Can't subscribe");
        consumer
            .stream()
            .try_for_each_concurrent(cpus, |message| {
                let txx = tx.clone();
                async move {
                    let result = String::from_utf8_lossy(
                        message
                            .payload()
                            .unwrap_or("Error serializing".as_bytes())
                    ).to_string();
                    txx.send(result).expect("Tx not sending");
                    Ok(())
                }
            })
            .await
            .expect("Error reading stream");
    });

    let state = AppState {
        producer: Arc::new(producer),
        receiver: rx
    };

    HttpServer::new(move || {
        App::new()
            .app_data(Data::new(state.clone()))
            .service(landing)
            .service(heartbeat)
    })
    .workers(cpus)
    .bind("0.0.0.0:8080")?
    .run()
    .await
}
I found some solved GitHub issues recommending actors instead, which I also tried on a separate branch.
This has worse performance than the main branch, performing around 200-300 requests per second.
I don't know where the bottleneck is or what is blocking the requests.
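One thing worth checking in the code above, offered as a guess rather than a confirmed diagnosis: receiver.recv() is flume's synchronous receive, which blocks the worker thread (the async variant is recv_async().await), and a single shared channel also means a reply can be delivered to whichever handler happens to be waiting, not necessarily the one whose Kafka key matches. A sketch of a per-request correlation approach, assuming the dashmap crate for the concurrent map and tokio::sync::oneshot for the per-request channels:

use std::sync::Arc;
use dashmap::DashMap;
use tokio::sync::oneshot;

// One oneshot sender per in-flight request, keyed by the Kafka message key.
type Pending = Arc<DashMap<String, oneshot::Sender<String>>>;

// In the handler: register the key, produce the record, then await only this reply.
async fn await_reply(pending: Pending, key: String) -> Option<String> {
    let (tx, rx) = oneshot::channel();
    pending.insert(key.clone(), tx);
    // ... produce the Kafka record using `key` here ...
    rx.await.ok() // yields to the executor instead of blocking the worker thread
}

// In the consumer task: route each incoming message to the matching waiter.
fn on_message(pending: &Pending, key: &str, payload: String) {
    if let Some((_, tx)) = pending.remove(key) {
        let _ = tx.send(payload);
    }
}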
I am trying to code around the fact that the messageLostHandler doesn't fire until many minutes after a device goes out of range when using Audio (or Earshot on Android).
I was hoping that every few seconds a message would be received from the other device, but it fires only once. Is this expected? Since I can't rely on the messageLost handler, how do I know when a device is truly out of range of the ultrasonic signal?
I coded up a timer after the subscriptionWithMessageFoundHandler fired, hoping that each incoming message would let me invalidate or restart the timer. If the timer fired, I'd know that x seconds had passed and the other device must be out of range. No such luck.
UPDATE: Here is the code in question:
let strategy = GNSStrategy.init(paramsBlock: { (params: GNSStrategyParams!) -> Void in
    params.discoveryMediums = .Audio
})

publication = messageMgr.publicationWithMessage(pubMessage, paramsBlock: { (pubParams: GNSPublicationParams!) in
    pubParams.strategy = strategy
})

subscription = messageMgr.subscriptionWithMessageFoundHandler({ [unowned self] (message: GNSMessage!) -> Void in
    self.messageViewController.addMessage(String(data: message.content, encoding: NSUTF8StringEncoding))
    // We only seem to get a 1x notification of a message, so this timer is folly.
    print("PING") // Only 1x per discovery.
}, messageLostHandler: { [unowned self] (message: GNSMessage!) -> Void in
    self.messageViewController.removeMessage(String(data: message.content, encoding: NSUTF8StringEncoding))
}, paramsBlock: { (subParams: GNSSubscriptionParams!) -> Void in
    subParams.strategy = strategy
})
Notice that the "PING" only prints once.
When a device goes out of range, Nearby waits for 2 minutes before flushing the other device's token from its cache. So if you wait for 2 minutes, the messageLost handler should be called. Can you verify this? Also, is it safe to assume that you'd like to have a timeout shorter than 2 minutes? This timeout has been a topic of discussion, and there's been some talk of adding a parameter so apps can choose a value that's more appropriate for its use case.
I'm developing a simple REST application that leverages RxJava to send requests to a remote server (1). For each incoming request to the REST API, a request is sent (using RxJava and RxNetty) to (1). Everything is working fine, but now I have a new use case:
In order not to bombard (1) with too many requests, I need to implement rate limiting. One way to solve this (I assume) would be to add each Observable created when sending a request to (1) into another Observable (2) that does the actual rate limiting. (2) will then act more or less like a queue and process the outbound requests as fast as possible (but not faster than the rate limit). Here's some pseudocode:
Observable<MyResponse> r1 = createRequestToExternalServer() // In thread 1
Observable<MyResponse> r2 = createRequestToExternalServer() // In thread 2
// Somehow send r1 and r2 to the "rate limiter" observable, (2)
rateLimiterObservable.sample(1 / rate, TimeUnit.MILLISECONDS)
How would I use Rx/RxJava to solve this?
I'd use a hot timer along with an atomic counter that keeps track of the remaining connections for the given duration:
int rate = 5;
long interval = 1000;
AtomicInteger remaining = new AtomicInteger(rate);

ConnectableObservable<Long> timer = Observable
    .interval(interval, TimeUnit.MILLISECONDS)
    .doOnNext(e -> remaining.set(rate))
    .publish();
timer.connect();

Observable<Integer> networkCall = Observable.just(1).delay(150, TimeUnit.MILLISECONDS);

Observable<Integer> limitedNetworkCall = Observable
    .defer(() -> {
        if (remaining.getAndDecrement() != 0) {
            return networkCall;
        }
        return Observable.error(new RuntimeException("Rate exceeded"));
    });

Observable.interval(100, TimeUnit.MILLISECONDS)
    .flatMap(t -> limitedNetworkCall.onErrorReturn(e -> -1))
    .take(20)
    .toBlocking()
    .forEach(System.out::println);
I am trying to build a service that grabs some pages from another web service, processes the content, and returns the results to users. I am using Play 2.2.3 with Scala.
val aas = WS.url("http://localhost/").withRequestTimeout(1000).withQueryString(("mid", mid), ("t", txt)).get

val result = aas.map { response =>
    (response.json \ "status").asOpt[Int].map { st => status = st }
    (response.json \ "msg").asOpt[String].map { txt => msg = txt }
}

val rs1 = Await.result(result, 5 seconds)
if (rs1.isDefined) {
    Ok("good")
}
The problem is that the service waits the full 5 seconds before returning "good", even when the WS request takes 100 ms. I also cannot lower the Await time to 100 ms, because the other web service I am requesting may take between 100 ms and 1 second to respond.
My question is: is there a way to process and serve the results as soon as they are ready, instead of waiting a fixed amount of time?
#wingedsubmariner already provided the answer. Since there is no code example, I will just post what it should be:
def wb = Action.async { request =>
    val aas = WS.url("http://localhost/").withRequestTimeout(1000).get
    aas.map(response => {
        Ok("responded")
    })
}
Now you don't need to wait for the WS to respond before deciding what to do; you can just tell Play what to do when it responds.