The ALSA MIDI sequencer API defines snd_seq_queue_status_get_tick_time, which returns the current tempo-based time of a running MIDI queue.
I could poll this information periodically, but I feel it would be cleaner to receive time events from the queue, whether tempo-based or not; a callback-based approach would be just as fine.
Is there a way to be notified of a MIDI queue's time events with the ALSA sequencer, periodically and without polling?
— Edit —
When I said “polling”, I meant using a SIGALRM signal handler and a timer with a sufficiently fine resolution (1/50 second).
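Concretely, the current approach looks something like this minimal sketch (hypothetical names; `seq` and `queue_id` are assumed to be an open sequencer handle and a running queue):

```c
#include <signal.h>
#include <sys/time.h>
#include <alsa/asoundlib.h>

static volatile sig_atomic_t alarm_fired = 0;
static void on_alarm(int sig) { (void)sig; alarm_fired = 1; }

void poll_tick_time(snd_seq_t *seq, int queue_id) {
    struct sigaction sa = { .sa_handler = on_alarm };
    sigaction(SIGALRM, &sa, NULL);

    /* fire every 20 ms = 1/50 s */
    struct itimerval tv = { { 0, 20000 }, { 0, 20000 } };
    setitimer(ITIMER_REAL, &tv, NULL);

    snd_seq_queue_status_t *status;
    snd_seq_queue_status_alloca(&status);
    for (;;) {
        pause();                            /* sleep until SIGALRM */
        if (!alarm_fired) continue;
        alarm_fired = 0;
        snd_seq_get_queue_status(seq, queue_id, status);
        snd_seq_tick_time_t tick =
            snd_seq_queue_status_get_tick_time(status);
        (void)tick;                         /* use the tick time */
    }
}
```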
If you want to be notified at a specific time, schedule an event to be sent to yourself at that time.
For example, arecordmidi does this to synchronize the playback of its metronome pattern.
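A minimal sketch of that technique, assuming an open sequencer handle `seq`, an own port `my_port`, and a running queue `queue_id` (all hypothetical names): schedule a user event to your own port at a target tick, then block on input until the sequencer delivers it.

```c
#include <alsa/asoundlib.h>

/* Schedule a user event back to ourselves at `target_tick`. */
void schedule_wakeup(snd_seq_t *seq, int my_port, int queue_id,
                     snd_seq_tick_time_t target_tick) {
    snd_seq_event_t ev;
    snd_seq_ev_clear(&ev);
    ev.type = SND_SEQ_EVENT_USR0;           /* arbitrary user event type */
    snd_seq_ev_set_source(&ev, my_port);
    snd_seq_ev_set_dest(&ev, snd_seq_client_id(seq), my_port);
    snd_seq_ev_schedule_tick(&ev, queue_id, 0, target_tick); /* absolute */
    snd_seq_event_output(seq, &ev);
    snd_seq_drain_output(seq);
}

/* The main loop then blocks in snd_seq_event_input(); when the queue
 * reaches target_tick the USR0 event arrives, and the handler can act
 * on it and schedule the next wake-up; no timer, no polling. */
```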
Customers... Have to love them :)
I built out a web process that starts a live stream in Azure Media Services, but in testing I've seen a couple of times where the end user just closes the browser instead of clicking the end broadcast button I've so nicely set up for them.
The problem then is obvious: the stream keeps on running. Multiply this a few times and I've got numerous live streams broadcasting nothing while I'm incurring costs.
Is there anything in the portal configuration (or even in the stream configuration: client.LiveEvents.CreateAsync(....)) that can stop these services even if the user closes the browser?
There are a few ways to approach this.
Your web application should prompt the user to end the broadcast when they close the browser; closing the page raises a browser event (beforeunload) that your web application can handle.
From the server side, you can monitor live events by subscribing to Event Grid events. There are two ways to do this as well; please see the documentation on the Event Grid event schema to learn more about them.
You can subscribe to the stream-level "Microsoft.Media.LiveEventEncoderDisconnected" event and, if no reconnection comes in for a while, stop and delete your live event.
Or you can subscribe to the track-level heartbeat events. If every track's incoming bitrate drops to 0, or its last timestamp stops increasing, you can also safely shut down the live event. Heartbeat events arrive every 20 seconds for every track, so they can be a little verbose.
To learn more about how to subscribe to Event Grid events, see the documentation here.
I've been using Akka's event stream in a Play app as an event bus, where I can publish events and subscribe listeners, and I want to know which gotchas I should take into account. Specifically, there are two things:
Each listener is implemented as an actor that receives the published events and processes them. What if the actor's message queue starts to get big? How can I implement back-pressure safely, guaranteeing that each event is eventually processed?
Related to the previous point: how can I persist the unprocessed events so that, in case of a failure, the application can restart and process them? I'm aware of akka-persistence, but I'm not sure it's the right fit here: the listener actors aren't stateful and don't need to replay past events; I only want to store unprocessed events and delete them once they have been processed.
Given those constraints, I would not use Akka's event bus for this purpose.
The main reasons are:
Delivery - You have no guarantee that event listeners are in fact listening (there is no ACK). It's possible to lose some events along the way.
Persistence - There is no built-in way of preserving the event bus's state.
Scaling - Akka's event bus is a local facility, meaning it's not suitable if you later want to run a cluster.
The easiest way to deal with this would be to use a message queue such as RabbitMQ. A while back I was using sstone/amqp-client. An MQ can provide you with persistent queues (one queue per listener or listener type).
An event is when you click on something and code runs right away.
Polling is when the application constantly checks whether your mouse button is held down, and if it's held down in a certain spot, code runs.
Do events really exist in computing, or is it all a layer built on polling?
This is a complicated question, and the answer depends on how far down you go (in abstraction layers) to answer it. Ultimately, your USB keyboard device is being polled once per millisecond by the computer to ask what keys are being held down. This information gets passed to the keyboard driver through a CPU interrupt when the USB device (in the computer) gets a packet of data from the keyboard. From then on, interrupts are used to pass the data from process to process (through the GUI framework) and eventually reach your application.
As Marc Cohen said in his answer, CPU interrupts are also raised to signal I/O completion. This is an example of something which has no polling until you get to the hardware level, where checks are performed (perhaps once per clock cycle? Someone with more experience with computer architecture should answer) to see if the event has taken place.
Simulating events by polling is a common technique, but it's often very inefficient: you face a tradeoff between event resolution and polling overhead. That doesn't mean true events don't exist.
A CPU interrupt, which can be raised to signal an external event such as I/O completion, is an example of an event all the way down at the hardware layer.
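To make that tradeoff concrete, here is a minimal sketch (assuming `fd` is some readable descriptor such as a socket) contrasting the two styles:

```c
#include <poll.h>
#include <unistd.h>

/* Polling: check repeatedly; the sleep interval is the resolution/CPU
 * tradeoff mentioned above (here 20 ms). */
void polling_style(int fd) {
    char buf[64];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf); /* fd opened O_NONBLOCK */
        if (n > 0) { /* handle data */ }
        usleep(20000);
    }
}

/* Event style: sleep in the kernel until data actually arrives;
 * no CPU is burned and latency is not limited by a poll interval. */
void event_style(int fd) {
    char buf[64];
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    for (;;) {
        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN))
            read(fd, buf, sizeof buf);
    }
}
```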
Well, both the operating system and the application level depend on events, not polling. Polling is usually resorted to where state cannot be maintained. At the desktop application and OS level, however, applications do have state, so they use events for their processing, not polling.
Background...
I am modifying Apple’s SimplePing example to do an ICMP ping in an iPhone app. The code wraps a native socket in a CFSocket object, specifying a read callback, then adds it as a run-loop source on the main thread. When a packet arrives on the socket, the callback is invoked to time the round trip, verify the contents, update the UI, etc.
Question...
What would be the best approach for moving this processing to a background thread so that the ping time is as accurate as possible? I need to measure the precise time between the call to the socket’s sendto function and the callback invocation, without interruption.
Any examples or pseudo code would be extremely helpful. I have done a lot of reading on threading in Cocoa (NSThread vs. NSOperation, NSRunLoop, etc.), but so far, I can’t quite piece it all together.
Thanks
Do you need to support iOS 3.x? If not, you could look into using Grand Central Dispatch; in this scenario, you would specify the socket as a source for a dispatch queue and give it the highest priority.
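A rough sketch of that setup in plain C (untested; `socket_fd` is assumed to be the native socket underlying the CFSocket): create a read dispatch source on a high-priority global queue so the callback runs off the main thread.

```c
#include <dispatch/dispatch.h>
#include <stdint.h>

/* Invoked on the high-priority queue whenever the socket is readable. */
static void socket_readable(void *context) {
    int fd = (int)(intptr_t)context;
    /* recvfrom(fd, ...) here: timestamp the reply, verify contents, and
     * dispatch any UI updates back to the main queue. */
    (void)fd;
}

dispatch_source_t start_listening(int socket_fd) {
    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
    dispatch_source_t src =
        dispatch_source_create(DISPATCH_SOURCE_TYPE_READ,
                               (uintptr_t)socket_fd, 0, q);
    dispatch_set_context(src, (void *)(intptr_t)socket_fd);
    dispatch_source_set_event_handler_f(src, socket_readable);
    dispatch_resume(src);      /* sources are created suspended */
    return src;                /* keep a reference; release when done */
}
```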