CoffeeScript: clear timeout with args

I've tried implementing solution 3 from this post http://evanhahn.com/smoothing-out-settimeout-in-coffeescript/ to create a delay function that I can pass arguments to. This works for me; however, I also need to be able to clear the timeout, and I'm not sure how to do that.
timeout = 5000
func = (message) ->
  console.log(message)
delay = (time, fn, args...) ->
  setTimeout fn, time, args...
newEvent = {
  id: 22,
  delay: delay 5000, -> func("hi")
}
I would like to be able to do this or something equivalent:
clearTimeout(newEvent.delay)
I have also thought about using Underscore's delay function, which allows passing arguments and stopping easily; however, due to the maximum timeout length being 24 days, I have to use https://www.npmjs.com/package/long-timeout

In order to clear a timeout, you need to assign the value returned by setTimeout to a variable and use that variable to clear it.
timeout = 5000
myTimeout = null
delay = (time, fn, args...) ->
  myTimeout = setTimeout fn, time, args...
# later, when you want to cancel it:
clearTimeout(myTimeout)
Note that you'll need to declare the timeout variable in a scope outside the delay function.

What I finished with:
timeout = 5000
func = (message) ->
  console.log(message)
delay = (time, fn, args...) ->
  newEvent = {
    id: 22,
    delay: setTimeout fn, time, args...
  }
# keep the returned object so the timeout can later be cancelled
# with clearTimeout(newEvent.delay)
newEvent = delay 5000, -> func("hi")

Limit API Calls to 40 per minute (Swift)

I have a limit of 40 URL Session calls to my API per minute.
I have timed the number of calls in any 60 s period, and when 40 calls have been reached I introduce sleep(x), where x is 60 minus the seconds already elapsed, i.e. the time remaining before a new minute starts. This works fine and the calls don't go over 40 in any given minute. However, the limit is still exceeded, because there might be more calls towards the end of one minute and more at the beginning of the next 60 s count, resulting in an API error.
I could add a:
usleep(x)
where x would be 60/40 in milliseconds. However, as some large data returns take much longer than simple queries that are practically instant, this would increase the overall download time significantly.
Is there a way to track the actual rate to see by how much to slow the function down?
This might not be the neatest approach, but it works perfectly: simply store the time of each call and compare against those times to see whether a new call can be made and, if not, what delay is required.
Using the previously suggested approach of a fixed delay of 60/40 = 1.5 s before each API call (minute / calls per minute), and given that each call takes a different time to produce a response, the total time taken to make 500 calls was 15 min 22 s. Using the approach below, the time taken was 11 min 52 s, since no unnecessary delay is introduced.
Call before each API Request:
API.calls.addCall()
Call this in the function before executing a new API task:
let limit = API.calls.isOverLimit()
if limit.isOver {
    sleep(limit.waitTime)
}
Background Support Code:
var globalApiCalls: [Date] = []

public class API {
    let limitePerMinute = 40 // Set API limit per minute
    let margin = 2 // Margin in case you issue more than one request at a time
    static let calls = API()

    func addCall() {
        globalApiCalls.append(Date())
    }

    func isOverLimit() -> (isOver: Bool, waitTime: UInt32) {
        let callInLast60s = globalApiCalls.filter({ $0 > date60sAgo() })
        if callInLast60s.count > limitePerMinute - margin {
            if let firstCallInSequence = callInLast60s.sorted(by: { $0 > $1 }).dropLast(2).last {
                let seconds = Date().timeIntervalSince1970 - firstCallInSequence.timeIntervalSince1970
                if seconds < 60 { return (true, UInt32(60 + margin) - UInt32(seconds.rounded(.up))) }
            }
        }
        return (false, 0)
    }

    private func date60sAgo() -> Date {
        var dayComponent = DateComponents(); dayComponent.second = -60
        return Calendar.current.date(byAdding: dayComponent, to: Date())!
    }
}
Instead of using sleep, have a counter. You can do this with a semaphore (essentially a counter for threads, allowing only x threads through at a time).
So if you only allow 40 threads at a time, you will never have more; additional threads will be blocked. This is much more efficient than calling sleep, because it automatically accounts for both long and short calls.
The trick here is that you would call a function like this every sixty seconds. That creates a new semaphore every minute that only allows 40 calls. Each semaphore does not affect the others, only its own threads.
func uploadImages() {
    let uploadQueue = DispatchQueue.global(qos: .userInitiated)
    let uploadGroup = DispatchGroup()
    let uploadSemaphore = DispatchSemaphore(value: 40)

    uploadQueue.async(group: uploadGroup) { [weak self] in
        guard let self = self else { return }
        for (_, image) in images.enumerated() {
            uploadGroup.enter()
            uploadSemaphore.wait()
            self.callAPIUploadImage(image: image) { (success, error) in
                uploadGroup.leave()
                uploadSemaphore.signal()
            }
        }
    }

    uploadGroup.notify(queue: .main) {
        // completion
    }
}

exit 0 not terminating program

I'm developing a client-server application in OCaml using the high-level network connection functions available in the OCaml Unix library, following the steps at https://caml.inria.fr/pub/docs/oreilly-book/html/book-ora187.html. These functions are:
val open_connection : sockaddr -> in_channel * out_channel
val shutdown_connection : in_channel -> unit
val establish_server : (in_channel -> out_channel -> unit) -> sockaddr -> unit
I'm able to successfully build the client and verifier, but I cannot terminate the server using OCaml's exit function.
My (minimal) server code is the following:
let handle_service ic oc =
  try
    while true do
      ...
      if ... then raise Finish_interaction
    done
  with
  | Finish_interaction -> raise Sys.Break
  | _ -> ...

let main_server serv_fun =
  if Array.length Sys.argv < 4 then ...
  else
    try
      let port = int_of_string Sys.argv.(1) in
      ...
      let my_address = Unix.inet_addr_loopback in
      Unix.establish_server serv_fun (Unix.ADDR_INET(my_address, port))
    with
    | Sys.Break -> exit 0 (* PROGRAM DOES NOT TERMINATE *)
    | _ -> ...

let go_server () =
  Unix.handle_unix_error main_server handle_service ;;

go_server ()
I can successfully catch the Sys.Break exception, but the exit 0 code after catching that exception does nothing and the server just keeps running and waiting for another client connection.
OCaml documentation says the following regarding establish_server:
The function Unix.establish_server never returns normally.
I don't know if this implies that I can never terminate the program without user interaction (via Ctrl + C, for example).
In a nutshell, how can I terminate my server? The client does terminate after shutdown_connection but the server keeps waiting for incoming connections. BTW, I'm compiling my code using OCamlbuild.
From the documentation of Unix.establish_server:
A new process is created for each connection
I recommend printing the process IDs (Unix.getpid ()) to make sure the process calling exit is the one you're expecting (the parent).
Another thing you can check is that the program is not stuck in the execution of an at_exit callback. For example, the following program enters an infinite loop during the call to exit:
let () =
  at_exit (fun () -> while true do () done);
  print_endline "all is well!";
  exit 0
(probably not the problem you're having but could be useful to future visitors)
I managed to solve my problem without needing to send signals between processes.
I tweaked the establish_server function so that it doesn't loop answering all incoming connections. With the loop removed, it answers a single incoming connection and then simply ends its execution.
Here is the code for the new establish_server function:
let establish_server server_fun sockaddr =
  let domain = Unix.domain_of_sockaddr sockaddr in
  let sock = Unix.socket domain Unix.SOCK_STREAM 0 in
  Unix.bind sock sockaddr ;
  Unix.listen sock 3;
  (*while true do*)
  let (s, caller) = Unix.accept sock in
  match Unix.fork() with
  | 0 ->
      if Unix.fork() <> 0 then exit 0 ;
      let inchan = Unix.in_channel_of_descr s
      and outchan = Unix.out_channel_of_descr s in
      server_fun inchan outchan ;
      close_in inchan ;
      close_out outchan ;
      exit 0
  | id -> Unix.close s; ignore(Unix.waitpid [] id)
  (*done ;;*)
Restoring the two commented-out lines gives you the original version of it.
Thanks everyone for the answers!

How do I prevent BB8 connections from breaking after several repeats

I have an application that should use a shared connection pool for all requests. I observe that at seemingly-random times, requests fail with the error type "Closed". I have isolated this behavior into the following example:
use lazy_static::lazy_static;
use bb8_postgres::bb8::Pool;
use bb8_postgres::PostgresConnectionManager;
use bb8_postgres::tokio_postgres::{NoTls, Client};

lazy_static! {
    static ref CONNECTION_POOL: Pool<PostgresConnectionManager<NoTls>> = {
        let manager = PostgresConnectionManager::new_from_stringlike("dbname=demodb host=localhost user=postgres", NoTls).unwrap();
        Pool::builder().build_unchecked(manager)
    };
}

fn main() {
    println!("Hello, world!");
}

#[cfg(test)]
mod test {
    use super::*;

    #[tokio::test]
    async fn much_insert_traffic() {
        much_traffic("INSERT INTO foo(a,b) VALUES (1, 2) RETURNING id").await
    }

    #[tokio::test]
    async fn much_select_traffic() {
        much_traffic("SELECT MAX(id) FROM foo").await
    }

    #[tokio::test]
    async fn much_update_traffic() {
        much_traffic("UPDATE foo SET a = 81 WHERE id = 1919 RETURNING b").await;
    }

    async fn much_traffic(stmt: &str) {
        let c = CONNECTION_POOL.get().await.expect("Get a connection");
        let client = &*c;
        for i in 0..10000i32 {
            let res = client.query_opt(stmt, &[]).await.expect(&format!("Perform repeat {} of {} ok", i, stmt));
        }
    }
}
When executing the tests, more than 50% of the time one of the tests fails at a later iteration with output similar to the following:
Perform repeat 8782 of UPDATE foo SET a = 81 WHERE id = 1919 RETURNING b ok: Error { kind: Closed, cause: None }
thread 'test::much_update_traffic' panicked at 'Perform repeat 8782 of UPDATE foo SET a = 81 WHERE id = 1919 RETURNING b ok: Error { kind: Closed, cause: None }', src\main.rs:44:23
Turns out the problem is entirely down to the #[tokio::test] attribute starting up a distinct runtime for every test that is executed. The lazy static is initialized under one of these runtimes, and as soon as that runtime shuts down, the pool is destroyed. The other tests (running on different runtimes) can use the value as long as the spawning test is still running, but are met with an invalid state once it has shut down.
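A minimal sketch of one way around this, reusing only the APIs already shown above: build the pool inside the runtime that will use it (here, once per test) rather than in a process-wide lazy_static. The make_pool helper and the test module name are illustrative, not part of the original code:
use bb8_postgres::bb8::Pool;
use bb8_postgres::PostgresConnectionManager;
use bb8_postgres::tokio_postgres::NoTls;

// Hypothetical helper: build a fresh pool for whichever runtime calls it.
fn make_pool() -> Pool<PostgresConnectionManager<NoTls>> {
    let manager = PostgresConnectionManager::new_from_stringlike(
        "dbname=demodb host=localhost user=postgres",
        NoTls,
    )
    .unwrap();
    // build_unchecked creates the pool without connecting; its connections are
    // spawned later, on the runtime that first uses it, i.e. the test's own runtime.
    Pool::builder().build_unchecked(manager)
}

#[cfg(test)]
mod per_runtime_pool {
    use super::*;

    #[tokio::test]
    async fn much_select_traffic() {
        // Each test owns its pool, so another test's runtime shutting down
        // can no longer close the connections out from under it.
        let pool = make_pool();
        let c = pool.get().await.expect("Get a connection");
        let client = &*c;
        client
            .query_opt("SELECT MAX(id) FROM foo", &[])
            .await
            .expect("query ok");
    }
}
Alternatively, all tests can share one explicitly constructed runtime so that the pool and its connections outlive every test.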

Rewriting looping blocking code to SwiftNIO style non-blocking code

I'm working on a driver that will read data from the network. It doesn't know how much is in a response, other than that when it tries to read and gets 0 bytes back, it is done. So my blocking Swift code looks naively like this:
func readAllBlocking() -> [Byte] {
    var buffer: [Byte] = []
    var fullBuffer: [Byte] = []
    repeat {
        buffer = read() // synchronous, blocking
        fullBuffer.append(contentsOf: buffer)
    } while buffer.count > 0
    return fullBuffer
}
How can I rewrite this as a promise that will keep on running until the entire result is read? After trying to wrap my brain around it, I'm still stuck here:
func readAllNonBlocking() -> EventLoopFuture<[Byte]> {
    // ...?
}
I should add that I can rewrite read() so that, instead of returning a [Byte], it returns an EventLoopFuture<[Byte]>.
Generally, loops in synchronous programming are turned into recursion to get the same effect with asynchronous programming that uses futures (and also in functional programming).
So your function could look like this:
func readAllNonBlocking(on eventLoop: EventLoop) -> EventLoopFuture<[Byte]> {
    // The accumulated chunks
    var accumulatedChunks: [Byte] = []
    // The promise that will hold the overall result
    let promise = eventLoop.makePromise(of: [Byte].self)

    // We turn the loop into recursion:
    func loop() {
        // First, we call `read` to read in the next chunk and hop
        // over to `eventLoop` so we can safely write to `accumulatedChunks`
        // without a lock.
        read().hop(to: eventLoop).map { nextChunk in
            // Next, we just append the chunk to the accumulation
            accumulatedChunks.append(contentsOf: nextChunk)
            guard nextChunk.count > 0 else {
                promise.succeed(accumulatedChunks)
                return
            }
            // and if it wasn't empty, we loop again.
            loop()
        }.cascadeFailure(to: promise) // if anything goes wrong, we fail the whole thing.
    }
    loop() // Let's kick everything off.
    return promise.futureResult
}
I would like to add two things, however.
First, what you're implementing above simply reads everything until EOF; if that piece of software is exposed to the internet, you should definitely add a limit on how many bytes it may hold in memory at most.
Secondly, SwiftNIO is an event-driven system, so if you were to read these bytes with SwiftNIO, the program would actually look slightly different. If you're interested in what it looks like to simply accumulate all bytes until EOF in SwiftNIO, it's this:
struct AccumulateUntilEOF: ByteToMessageDecoder {
    typealias InboundOut = ByteBuffer

    func decode(context: ChannelHandlerContext, buffer: inout ByteBuffer) throws -> DecodingState {
        // `decode` will be called whenever new data comes in.
        // We simply return `.needMoreData` because we always need more data: our message end is EOF.
        // ByteToMessageHandler will automatically accumulate everything for us because we tell it that we need
        // more data to decode a message.
        return .needMoreData
    }

    func decodeLast(context: ChannelHandlerContext, buffer: inout ByteBuffer, seenEOF: Bool) throws -> DecodingState {
        // `decodeLast` will be called when NIO knows that this is the _last_ time a decode function is called.
        // Usually, this is because of EOF or an error.
        if seenEOF {
            // This is what we've been waiting for: `buffer` should contain all bytes, let's fire them through
            // the pipeline.
            context.fireChannelRead(self.wrapInboundOut(buffer))
        } else {
            // Odd, something else happened, probably an error or we were just removed from the pipeline. `buffer`
            // will now contain what we received so far, but maybe we should just drop it on the floor.
        }
        buffer.clear()
        return .needMoreData
    }
}
If you wanted to make a whole program out of this with SwiftNIO, here's an example: a server that accepts all data until it sees EOF and then literally just writes back the number of received bytes :). Of course, in the real world you would never hold on to all the received bytes just to count them (you could simply add up the size of each piece as it arrives), but I guess it serves as an example.
import NIO

let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
defer {
    try! group.syncShutdownGracefully()
}

struct AccumulateUntilEOF: ByteToMessageDecoder {
    typealias InboundOut = ByteBuffer

    func decode(context: ChannelHandlerContext, buffer: inout ByteBuffer) throws -> DecodingState {
        // `decode` will be called whenever new data comes in.
        // We simply return `.needMoreData` because we always need more data: our message end is EOF.
        // ByteToMessageHandler will automatically accumulate everything for us because we tell it that we need
        // more data to decode a message.
        return .needMoreData
    }

    func decodeLast(context: ChannelHandlerContext, buffer: inout ByteBuffer, seenEOF: Bool) throws -> DecodingState {
        // `decodeLast` will be called when NIO knows that this is the _last_ time a decode function is called.
        // Usually, this is because of EOF or an error.
        if seenEOF {
            // This is what we've been waiting for: `buffer` should contain all bytes, let's fire them through
            // the pipeline.
            context.fireChannelRead(self.wrapInboundOut(buffer))
        } else {
            // Odd, something else happened, probably an error or we were just removed from the pipeline. `buffer`
            // will now contain what we received so far, but maybe we should just drop it on the floor.
        }
        buffer.clear()
        return .needMoreData
    }
}

// Just an example "business logic" handler. It will wait for one message
// and just write back the length.
final class SendBackLengthOfFirstInput: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer
    typealias OutboundOut = ByteBuffer

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        // Once we receive the message, we allocate a response buffer and just write the length of the received
        // message in there. We then also close the channel.
        let allData = self.unwrapInboundIn(data)
        var response = context.channel.allocator.buffer(capacity: 10)
        response.writeString("\(allData.readableBytes)\n")
        context.writeAndFlush(self.wrapOutboundOut(response)).flatMap {
            context.close(mode: .output)
        }.whenSuccess {
            context.close(promise: nil)
        }
    }

    func errorCaught(context: ChannelHandlerContext, error: Error) {
        print("ERROR: \(error)")
        context.channel.close(promise: nil)
    }
}

let server = try ServerBootstrap(group: group)
    // Allow us to reuse the port after the process quits.
    .serverChannelOption(ChannelOptions.socket(.init(SOL_SOCKET), .init(SO_REUSEADDR)), value: 1)
    // We should allow half-closure because we want to write back after having received an EOF on the input.
    .childChannelOption(ChannelOptions.allowRemoteHalfClosure, value: true)
    // Our program consists of two parts:
    .childChannelInitializer { channel in
        channel.pipeline.addHandlers([
            // 1: The accumulate-everything-until-EOF handler
            ByteToMessageHandler(AccumulateUntilEOF(),
                                 // We want 1 MB of buffering max. If you remove this parameter, it'll also
                                 // buffer indefinitely.
                                 maximumBufferSize: 1024 * 1024),
            // 2: Our "business logic"
            SendBackLengthOfFirstInput()
        ])
    }
    // Let's bind port 9999
    .bind(to: SocketAddress(ipAddress: "127.0.0.1", port: 9999))
    .wait()

// This will never return.
try server.closeFuture.wait()
Demo:
$ echo -n "hello world" | nc localhost 9999
11

Force non-blocking read with TcpStream

I've got a thread that maintains a list of sockets, and I'd like to traverse the list, see if there is anything to read and, if so, act upon it; if not, move on to the next one. The problem is that as soon as I come across the first node, all execution halts until something comes through on the read.
I'm using std::io::Read::read(&mut self, buf: &mut [u8]) -> Result<usize>
From the doc
This function does not provide any guarantees about whether it blocks waiting for data, but if an object needs to block for a read but cannot it will typically signal this via an Err return value.
Digging into the source, the TcpStream Read implementation is
impl Read for TcpStream {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> { self.0.read(buf) }
}
Which invokes
pub fn read(&mut self, buf: &mut [u8]) -> IoResult<uint> {
    let fd = self.fd();
    let dolock = || self.lock_nonblocking();
    let doread = |nb| unsafe {
        let flags = if nb {c::MSG_DONTWAIT} else {0};
        libc::recv(fd,
                   buf.as_mut_ptr() as *mut libc::c_void,
                   buf.len() as wrlen,
                   flags) as libc::c_int
    };
    read(fd, self.read_deadline, dolock, doread)
}
And finally, that calls read<T, L, R>(fd: sock_t, deadline: u64, mut lock: L, mut read: R), where I can see a loop over non-blocking reads until data has been retrieved or an error has occurred.
Is there a way to force a non-blocking read with TcpStream?
Updated Answer
It should be noted that, as of Rust 1.9.0, std::net::TcpStream has added the following functionality:
fn set_nonblocking(&self, nonblocking: bool) -> Result<()>
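For current Rust versions, here is a minimal sketch of how that can be used when polling a list of sockets; the try_read and poll_all helpers and their return convention are illustrative, not standard-library APIs. With set_nonblocking(true), a read that would otherwise block returns an ErrorKind::WouldBlock error instead:
use std::io::{ErrorKind, Read};
use std::net::TcpStream;

// Ok(Some(n)): n bytes were read (n == 0 means the peer closed the connection).
// Ok(None): nothing available right now (the read would have blocked).
// Err(e): a real I/O error.
fn try_read(stream: &mut TcpStream, buf: &mut [u8]) -> std::io::Result<Option<usize>> {
    match stream.read(buf) {
        Ok(n) => Ok(Some(n)),
        Err(ref e) if e.kind() == ErrorKind::WouldBlock => Ok(None),
        Err(e) => Err(e),
    }
}

fn poll_all(streams: &mut [TcpStream]) -> std::io::Result<()> {
    let mut buf = [0u8; 512];
    for stream in streams.iter_mut() {
        // Ask the OS not to block reads on this socket.
        stream.set_nonblocking(true)?;
        match try_read(stream, &mut buf)? {
            Some(0) => println!("peer closed the connection"),
            Some(n) => println!("read {} bytes", n),
            None => {} // nothing to read, move on to the next socket
        }
    }
    Ok(())
}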
Original Answer
I couldn't get exactly what I wanted with TcpStream, and I didn't want to pull in a separate lib for IO operations, so I decided to set the file descriptor as non-blocking before using it and to execute a system call to read/write. Definitely not the safest solution, but less work than implementing a new IO lib, even though MIO looks great.
extern "system" {
fn read(fd: c_int, buffer: *mut c_void, count: size_t) -> ssize_t;
}
pub fn new(user: User, stream: TcpStream) -> Socket {
// First we need to setup the socket as Non-blocking on POSIX
let fd = stream.as_raw_fd();
unsafe {
let ret_value = libc::fcntl(fd,
libc::consts::os::posix01::F_SETFL,
libc::consts::os::extra::O_NONBLOCK);
// Ensure we didnt get an error code
if ret_value < 0 {
panic!("Unable to set fd as non-blocking")
}
}
Socket {
user: user,
stream: stream
}
}
pub fn read(&mut self) {
let count = 512 as size_t;
let mut buffer = [0u8; 512];
let fd = self.stream.as_raw_fd();
let mut num_read = 0 as ssize_t;
unsafe {
let buf_ptr = buffer.as_mut_ptr();
let void_buf_ptr: *mut c_void = mem::transmute(buf_ptr);
num_read = read(fd, void_buf_ptr, count);
if num_read > 0 {
println!("Read: {}", num_read);
}
println!("test");
}
}