How to understand the seemingly default/implicit ALSA MIDI default queue?

Background
In the snippet below, I send a MIDI note with a duration to a synthesizer port. For this to work, I need to allocate and start a queue, or else I get an "Invalid argument" error (code -22) from ALSA.
The operations that actually send the event make no reference to the queue, which seems to be used implicitly. However, a MIDI application may have multiple queues, which adds to my confusion.
In my understanding, the queue is not a buffer; rather, it is needed to manage timed events (which is why one is required to send a note with a duration), so I gather a queue is needed whenever an event carries timing information (a start and/or end time).
Questions
How is the seemingly default queue determined when sending an event makes no explicit reference to a queue?
In the context of the question above, what exactly happens when an application creates multiple queues? Is the implicit one the first one created?
Is there already a default queue even before I create one, which I could start instead of creating a new (and sole) one?
Annexe
The snippet mentioned above:
static void test_send(void) {
    snd_seq_event_t ev;

    int queue = snd_seq_alloc_queue(seq);
    check_error(queue, "snd_seq_alloc_queue");
    snd_seq_start_queue(seq, queue, NULL);

    snd_seq_ev_clear(&ev);
    snd_seq_ev_set_note(&ev, 0, 64, 127, 1);
    snd_seq_ev_set_source(&ev, out_port);
    snd_seq_ev_set_dest(&ev, synth_addr.client, synth_addr.port);

    int status = snd_seq_event_output_direct(seq, &ev);
    check_error(status, "snd_seq_event_output_direct");

    snd_seq_free_queue(seq, queue);
}

There is no default queue.
An event must always specify the queue through which it will be sent (this can be done with snd_seq_ev_schedule_tick() or snd_seq_ev_schedule_real()), or it must set the queue field to SND_SEQ_QUEUE_DIRECT to indicate that no queue is to be used (this can be done with snd_seq_ev_set_direct()).
When you forget to set the queue field of the event, it stays at zero, which happens to be the number of your queue.
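For illustration only, here is a minimal sketch (reusing seq, out_port, synth_addr, check_error, and the queue variable from the snippet above) of naming the queue explicitly instead of relying on the field staying at zero:

snd_seq_event_t ev;
snd_seq_ev_clear(&ev);
snd_seq_ev_set_note(&ev, 0, 64, 127, 1);                      /* note with a duration */
snd_seq_ev_set_source(&ev, out_port);
snd_seq_ev_set_dest(&ev, synth_addr.client, synth_addr.port);

/* Either: schedule the event on the queue explicitly (relative tick 0 = "now").
   A note with a duration needs a running queue so its note-off can be scheduled. */
snd_seq_ev_schedule_tick(&ev, queue, 1, 0);

/* Or: bypass queues entirely; only sensible for untimed events, e.g. separate
   note-on/note-off events without a duration:
   snd_seq_ev_set_direct(&ev); */

int status = snd_seq_event_output_direct(seq, &ev);
check_error(status, "snd_seq_event_output_direct");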

Related

Priority-based queuing in queue block does not work

I am trying to use priority based queuing in my queue block. My process is as follows:
source, wait, queue, packing_machine
On exit of the wait block, the agent gets assigned a priority in agent.atrPriority. In queue I have selected Queuing: Priority based and at Agent priority I use: agent.atrPriority.
By printing to the console I am checking if the sequence in which the agents enter the packing_machine block is correct (according to their priority), but it isn't. It keeps sending the agents from queue to packing_station on a FIFO basis.
I have tried assigning agent.atrPriority at different places in the model, but I do not think that is the problem. I have also tried using agent comparison with agent1.atrPriority.before(agent2.atrPriority); but it gives the error 'Cannot invoke before(int) on the primitive type int'.
Does anyone know why it is not working accordingly?
The queue is working, so it is not a bug.
Try a quick test: put another delay between the Wait and the Queue. Set the delay duration to be 0.0001 sec or something tiny.
If this fixes it, the culprit is that you change the atrPriority field "on Exit" of Wait, which is effectively too late. It basically changes after the downstream Queue accesses the priority value.
Another option: change the atrPriority value before you call wait.free(...). This way, you can be sure the priority is set to the right value before the agent enters the queue.

What's the read logic when I call recvfrom() function in C/C++

I wrote a C++ program that creates a socket and binds it to receive ICMP/UDP packets. The code I wrote is as follows:
while (true) {
    recvfrom(sockId, rePack, sizeof(rePack), 0, (struct sockaddr *)&raddr, (socklen_t *)&len);
    processPakcet(recv_size);
}
So I used an endless while loop to receive messages continually, but I am worried about the following two questions:
1. How long will a message be kept in the receive queue (or, say, in the NIC queue)?
I am worried that if it takes too long to process the first message, I might miss the second one. So how quickly do I have to read again after each read?
2. How do I prevent reading duplicated messages?
That is, does the receive queue know about me? Once my thread has finished reading the first message, will the queue automatically give me the second one? Or, put differently, when I read the first message, is it removed from the queue so that no one can receive it again?
Additionally, I think the while(true) loop is not good; could anyone give me a better suggestion? (I have heard of something like a polling model.)
First, you should always check the return value from recvfrom. It's unlikely the recvfrom will fail, but if it does (for example, if you later implement signal handling, it might fail with EINTR) you will be processing undefined data. Also, of course, the return value tells you the size of the packet you received.
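As a minimal sketch of what that looks like (the buffer size, the EINTR handling, and the processPacket() helper are assumptions for illustration, not part of the original program; it needs <sys/socket.h>, <netinet/in.h>, <errno.h>, and <stdio.h>):

unsigned char buf[65536];
struct sockaddr_in raddr;

while (true) {
    socklen_t len = sizeof(raddr);
    ssize_t n = recvfrom(sockId, buf, sizeof(buf), 0,
                         (struct sockaddr *)&raddr, &len);
    if (n < 0) {
        if (errno == EINTR)            /* interrupted by a signal: just retry */
            continue;
        perror("recvfrom");
        break;
    }
    processPacket(buf, (size_t)n);     /* n is the size of this packet */
}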
For question 1, the actual answer is operating system-dependent. However, most operating systems will buffer some number of packets for you. The OS interrupt handler that handles the incoming packet will never be copying it directly into your application level buffer, so it will always go into an OS buffer first. The OS has previously noted your interest in it (by virtue of creating the socket and binding it you expressed interest), so it will then place a pointer to the buffer onto a queue associated with your socket.
A different part of the OS code will then (after the interrupt handler has completed) copy the data from the OS buffer into your application memory, free the OS buffer, and return to your program from the recvfrom system call. If additional packets come in, either before or after you have started processing the first one, they'll be placed on the queue too.
That queue is not infinite of course. It's likely that you can configure how many packets (or how much buffer space) can be reserved, either at a system-wide level (think sysctl-type settings in linux), or at the individual socket level (setsockopt / ioctl).
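For example, on Linux the per-socket receive buffer can be enlarged with setsockopt (the 4 MiB value below is an arbitrary sketch; the kernel caps the effective size at the net.core.rmem_max sysctl):

int rcvbuf = 4 * 1024 * 1024;          /* ask for a 4 MiB receive buffer */
if (setsockopt(sockId, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
    perror("setsockopt(SO_RCVBUF)");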
If, when you call recvfrom, there are already queued packets on the socket, the system call handler will not block your process, instead it will simply copy from the OS buffer of the next queued packet into your buffer, release the OS buffer, and return immediately. As long as you can process incoming packets roughly as fast as they arrive or faster, you should not lose any. (However, note that if another system is generating packets at a very high rate, it's likely that the OS memory reserved will be exhausted at some point, after which the OS will simply discard packets that exceed its resource reservation.)
For question 2, you will receive no duplicate messages (unless something upstream of your machine is actually duplicating them). Once a queued message is copied into your buffer, it's released before returning to you. That message is gone forever.
(Note that it's possible that some other process has also created a socket expressing interest in the same packets. That process would also get a copy of the packet data, which is typically handled internal to the operating system by reference counting rather than by actually duplicating the OS buffers, although that detail is invisible to applications. In any case, once all interested processes have received the packet, it will be discarded.)
There's really nothing at all wrong with a while (true) loop; it's a very common control structure for long-running server-type programs. If your program has nothing else it needs to be doing in the meantime, letting it block in recvfrom inside such a loop is the simplest and hence clearest way to implement it.
(You could use a select(2) or poll(2) call to wait. This allows you to handle waiting for any one of multiple file descriptors at the same time, or to periodically "time out" and go do something else, say, but again if you have nothing else you might need to be doing in the meantime, that is introducing needless complication.)
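If you do go that route, a sketch of the poll(2) variant might look like this (a one-second timeout chosen arbitrarily; it needs <poll.h>):

struct pollfd pfd = { .fd = sockId, .events = POLLIN };

for (;;) {
    int ready = poll(&pfd, 1, 1000);   /* wait at most 1000 ms */
    if (ready < 0) {
        if (errno == EINTR)
            continue;
        perror("poll");
        break;
    }
    if (ready == 0) {
        /* timeout: do any periodic work here */
        continue;
    }
    if (pfd.revents & POLLIN) {
        /* socket is readable: recvfrom() will not block now */
    }
}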

Swift: Accessibility: How can I queue events to be executed in sequence?

I want to know if there is a way to queue up accessibility readouts or element focus events one after another.
If I use either: UIAccessibilityPostNotification(UIAccessibilityAnnouncementNotification, "My Error Message")
or:
UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, self.continueButton)
The second call will interrupt the readout that is currently being read.
And obviously, if you use Dispatch with Delay, it's not robust, because different languages have different lengths of content, and also the user has a different readout speed set, which may be set to very slow. So how can I "queue up" multiple focus/read out events and ensure that only one of them gets read out at a time in sequence?
After you post your first announcement you need to wait for UIAccessibilityAnnouncementDidFinishNotification (see more here) before you post the 2nd one.
So build a queue (a set could do it) and whenever UIAccessibilityAnnouncementDidFinishNotification is triggered by the system just pop the first notification in your set (if present) and fire it away.

Abort socket operation Windows Phone

I am using pseudo-synchronous sockets in a Windows Phone 7 application. My socket code is based on the sample from http://msdn.microsoft.com/en-us/library/hh202858(v=vs.92).aspx.
The server's sending pattern is somewhat unpredictable. It starts with a fixed-size header that contains the length of the rest of the message. I first read in this header, and then I read the specified number of bytes from the socket.
Since I need to send messages to the server as well, and my attempts at duplexing the socket with a thread for receiving and another thread for sending caused lots of problems, I have a loop like this in my code:
while (KeepConnectionGoing)
{
    byte[] Rcvd;
    Rcvd = Socket.Receive(); // Returns null if no message received in 50 ms
    if (Rcvd != null)
    {
        ParseMessage(Rcvd);
    }
    if (HasMessageThatNeedsToBeSent())
    {
        byte[] Message = GetMessageToSend();
        Socket.Send(Message);
    }
}
This works fine for the majority of the time, but strange things happen when the message is null.
Because the timeout in the Receive method (see the linked sample) uses a ManualResetEvent, the receive request on the socket is never actually cancelled. Even though the method returns, that request waits around somewhere, and when data becomes available on the socket, it chomps up the header. Since the event handler has nothing left to do with the data it received (the method has returned and its variables will never be used again), the data basically disappears. The read request that I expect to return the header instead reads the bytes after the header, and I have no idea how long the message is.
I'd like to be able to cancel all outstanding requests if the socket times out. I am using anonymous methods as in the sample, since that simplifies everything and saves me from writing all the state-transfer code myself. Thus, I cannot unhook the event handler. I think, though, that even if I were using a named method as the event handler and unhooked it before the asynchronous operation completed, the callback would still be called. (I haven't tested this; it's just my understanding.)
Right now, the only solution I can see is hacking together some static byte arrays (i.e. having a static byte[] Header and, if it is null, reading the header, otherwise reading the message), but that seems like a really inelegant solution and very prone to race conditions.
Is there a better way?
Thanks
It appears there really is no good way to do this. A poll method would be nice, but Silverlight doesn't have it. I hacked together a solution using static flags to tell me what state I am in (Has the header been requested, has the message been requested), a static int for the length and a static buffer.
At the beginning of the method, either the header or the body can be requested. If the header has already been requested, the thread waits until a valid body length is available. If this wait times out, that means that the header receive operation is still pending, but there really is no message available. Otherwise, it reads in that length of a message.
If the header has not been requested, receive the header. In the event handler, after completion, check to see if the control flow has already continued (i.e. the receive operation took too long, so the function returned already, but is now actually done). Update the length, then request the body unless it timed out.

What is the difference between GCD Dispatch Sources and select()?

I've been writing some code that replaces some existing:
while (runEventLoop) {
    if (select(openSockets, readFDS, writeFDS, errFDS, timeout) > 0) {
        // check file descriptors for activity and dispatch events based on same
    }
}
socket reading code. I'd like to change this to use a GCD queue, so that I can pop events onto the queue using dispatch_async instead of maintaining a "must be called on next iteration" array. I am also already using a GCD queue to /contain/ this particular action, hence wanting to devolve it into a more natural GCD dispatch form (not a while() loop monopolizing a serial queue).
However, when I tried to refactor this into a form that relied on dispatch sources fired from event handlers tied to DISPATCH_SOURCE_TYPE_READ and DISPATCH_SOURCE_TYPE_WRITE on the socket descriptors, the library code that depended on this scheduling stopped working. My first assumption is that I'm misunderstanding the use of DISPATCH_SOURCE_TYPE_READ and DISPATCH_SOURCE_TYPE_WRITE - I had assumed that they would yield roughly the same behavior as calling select() with those socket descriptors.
Do I misunderstand GCD dispatch sources? Or, regarding the refactor, am I using it in a situation where it is not best suited?
The short answer to your question is: none. There is no difference; both GCD dispatch sources and select() do the same thing: they notify the user that a specific kernel event happened or that a particular condition holds true.
Note that, on a Mac or iOS device, you should not use select(), but rather the more advanced kqueue() and kevent() (or kevent64()).
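For reference, a bare-bones kqueue()/kevent() sketch for a single readable descriptor fd (error handling mostly omitted; the declarations come from <sys/event.h>):

int kq = kqueue();
struct kevent change, event;

/* register interest in readability of fd */
EV_SET(&change, fd, EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, NULL);

/* block until an event is delivered (a timespec could be passed instead of NULL) */
int n = kevent(kq, &change, 1, &event, 1, NULL);
if (n > 0 && event.filter == EVFILT_READ) {
    /* event.data is the number of bytes available to read on fd */
}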
You may certainly convert the code to use GCD dispatch sources, but you need to be careful not to break other code relying on this scheduling. So this requires a complete inspection of all the code handling signals, file descriptors, sockets, and the other low-level kernel events.
Maybe a simpler solution would be to keep the original code and simply add GCD code in the part that reacts to events. There, you dispatch events onto different queues depending on the particular type of event.
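To make the dispatch-source form concrete, here is a rough sketch (fd is assumed to be a socket descriptor and handle_readable() a hypothetical callback; the APIs come from <dispatch/dispatch.h> and the handlers use Clang's blocks extension):

dispatch_queue_t q = dispatch_queue_create("socket.events", DISPATCH_QUEUE_SERIAL);
dispatch_source_t src = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ, fd, 0, q);

dispatch_source_set_event_handler(src, ^{
    /* dispatch_source_get_data() estimates how many bytes can currently be read */
    size_t estimated = dispatch_source_get_data(src);
    handle_readable(fd, estimated);    /* hypothetical: read and process the data */
});
dispatch_source_set_cancel_handler(src, ^{
    close(fd);                         /* keep the fd open until the source is cancelled */
});

dispatch_resume(src);                  /* dispatch sources are created suspended */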