Listen to MIDI events from AKSequencer - Swift

AudioKit 4.9. I would like to receive MIDI events from the new AKSequencer, handle them, and send the handled data to external gear connected to a MIDI port on an audio interface, the way I can with an AKMIDICallbackInstrument set as the output of an AKAppleSequencer. Investigating the source code, I've figured out that there are two ways to receive MIDI from AKSequencerEngineDSPKernel (the internal sequencer in AKSequencer): through an AudioUnit and via a MIDIEndpoint. I couldn't find any AudioUnit alternative to AKMIDICallbackInstrument, so I wonder: is there any way to pass a value to the midiEndpoint of AKSequencerEngineDSPKernel that corresponds to AKMIDICallbackInstrument.midiIn? It's possible that I've missed something; if so, please explain a better way to solve this problem.
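For reference, here is a minimal sketch of the AKAppleSequencer + AKMIDICallbackInstrument routing I'm referring to (AudioKit 4.x; exact API names may vary slightly between versions, so treat it as an outline rather than a drop-in solution):

```swift
import AudioKit

let sequencer = AKAppleSequencer(filename: "loop") // hypothetical MIDI file in the bundle
let callbackInstrument = AKMIDICallbackInstrument()
let midi = AKMIDI()
midi.openOutput() // assumed to reach the port on the external interface

callbackInstrument.callback = { status, noteNumber, velocity in
    // Handle the event here, then forward it to the external gear.
    if status >> 4 == 0x9 { // note-on, any channel
        midi.sendNoteOnMessage(noteNumber: noteNumber, velocity: velocity)
    }
}

// Route every sequencer track into the callback instrument.
for track in sequencer.tracks {
    track.setMIDIOutput(callbackInstrument.midiIn)
}
sequencer.play()
```

What I'm missing is an equivalent hook for the new AKSequencer.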

Related

How to get serial port data in an event form in a Flutter Windows desktop application

I am making a Windows application which communicates with custom hardware over a serial port (it sends data to and receives data from the device, processes it, and displays it on the relevant widgets).
I am stuck on the receiving side. I want a function to be called whenever data arrives on the serial port, without having to check for it constantly. I am using the serial_port_win32 package to communicate with my device.
As far as I understand the package, there is no way to subscribe to a serial-port receive event and have it call a relevant function.
I could write a function that is invoked by a timer and reads the serial port data, but that does not sound like a good solution.
I have looked at the Dart Stream option. The way I understand it, the Stream function's body would contain code that checks whether any data is available on the serial port and yields it if there is. But there, too, I would have to provide some timeout in the read function in case there is no data.
None of the solutions I can think of are truly event-triggered. I am sure there must be a more elegant way of doing this.

Catch audio stream in FreeSWITCH

Given a SIP call between two people, with FreeSWITCH as my telephony engine, how can I capture each person's audio stream separately and process it before it is sent to the other end? Thanks in advance for your help.
The only way I can think of is to set up two conferences: originate a call to A and connect it to Conference A on answer, then call B and connect it to Conference B.
Now, if A speaks, you can record the audio, convert it to text, translate it, convert it back to audio, and play it into Conference B, and vice versa.
ESL is a powerful FreeSWITCH module that lets you receive all of FreeSWITCH's application events and act on them. In a conference you get events when a member speaks, joins, leaves, mutes, and so on. It's just an idea; I haven't tried it.
It's like http://www.iamili.com/ , which is what you're trying to build :)

Application-defined events in RTP

I'm researching the ability to send custom timestamped data events over RTP. An example might be a sequence of chat messages that should remain synchronized with whatever audio/video is being streamed. These messages have no intrinsic audio or video interpretation; it would be up to the client software to do something appropriate (add them to a chat log, etc).
I found some evidence that people accomplish this with a custom RTP codec. I also saw some talk of custom RTP payloads. Any light that can be shed here would be appreciated.
I would also be interested in hearing about possible implementations outside of RTP.
For transporting custom data over RTP, it is probably best to use a custom unassigned payload type (see the list at http://www.iana.org/assignments/rtp-parameters/rtp-parameters.xml). A more flexible approach would use a dynamic payload type assignment (see RFC 3551).
The sending side would set up the RTP header (see https://www.rfc-editor.org/rfc/rfc3550#section-5.1) with this payload type and the timestamp of the real-time media frame you want to stay in sync with.
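As an illustration, here is a hedged Swift sketch of packing the fixed 12-byte RTP header with a dynamic payload type; it is not tied to any particular library, and the parameter values are examples only:

```swift
import Foundation

// Pack a minimal RTP header (RFC 3550, section 5.1): no padding,
// no extension, no CSRCs, marker bit clear.
func makeRTPHeader(payloadType: UInt8,    // dynamic PT, typically 96-127
                   sequenceNumber: UInt16,
                   timestamp: UInt32,     // taken from the synced media clock
                   ssrc: UInt32) -> Data {
    var header = Data(count: 12)
    header[0] = 0x80                       // V=2, P=0, X=0, CC=0
    header[1] = payloadType & 0x7F         // M=0, PT in the low 7 bits
    header[2] = UInt8(sequenceNumber >> 8) // sequence number, big-endian
    header[3] = UInt8(sequenceNumber & 0xFF)
    for i in 0..<4 {                       // timestamp, big-endian
        header[4 + i] = UInt8((timestamp >> (8 * (3 - i))) & 0xFF)
    }
    for i in 0..<4 {                       // SSRC, big-endian
        header[8 + i] = UInt8((ssrc >> (8 * (3 - i))) & 0xFF)
    }
    return header
}

// The custom payload (e.g. a UTF-8 chat message) is then appended:
// let packet = makeRTPHeader(payloadType: 98, sequenceNumber: seq,
//                            timestamp: mediaTimestamp, ssrc: ssrc) + messageData
```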
On the receiving end, you would dispatch handling of the RTP data based on the payload type in the header of the received RTP packet. The handling should probably allow for a little latency between the arrival of the media and the custom packet, and then (dis)play both together.
If you are working in Java, you can probably build your application on the architecture and abstractions provided by the JMF (http://www.oracle.com/technetwork/java/javase/tech/index-jsp-140239.html).

Using a write stream directly to an IP and port in iOS

I'm trying to create an app that can send a .pdf file directly to a printer from an iPhone. I'd like to create a raw socket connection: I should be able to open a stream to a specific IP address and port, then write the .pdf file into the stream so the printer (or my server on the computer) receives it.
I've already made a stream using FTP. Of course, printers don't speak the FTP protocol. That is why I want to send the data as a raw stream to the device's port.
Any idea how I can accomplish this?
CocoaAsyncSocket is really easy to use and has good documentation. It is an Objective-C wrapper around lower-level socket primitives. It sounds like all you need is to write data out to a socket; if so, it is the easiest way to go.
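A minimal Swift sketch using CocoaAsyncSocket's GCDAsyncSocket, assuming the printer accepts raw data on port 9100 (the conventional raw/JetDirect printing port; check what your printer actually listens on):

```swift
import CocoaAsyncSocket

class RawPrintSender: NSObject, GCDAsyncSocketDelegate {
    private var socket: GCDAsyncSocket!

    func send(pdfData: Data, toPrinterAt host: String, port: UInt16 = 9100) {
        socket = GCDAsyncSocket(delegate: self, delegateQueue: .main)
        do {
            try socket.connect(toHost: host, onPort: port)
            // Writes queued before the connection completes are sent
            // once it is established.
            socket.write(pdfData, withTimeout: 30, tag: 0)
            socket.disconnectAfterWriting() // close once everything is flushed
        } catch {
            print("Connection failed: \(error)")
        }
    }

    func socket(_ sock: GCDAsyncSocket, didWriteDataWithTag tag: Int) {
        print("PDF data handed to the printer")
    }
}
```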

Virtual MIDI and VSTs

I would like to make a simple VST plugin that does this:
analyzes an audio stream (volume, beat, etc.)
has triggers on the analyzer's output (e.g. do something when volume > threshold)
generates MIDI events based on those triggers
This is to be able to chain plugins, even if they are not designed for it. For example, I could control the gain of a compressor with the envelope of an audio stream, simply by connecting the MIDI OUT of my plugin to the MIDI IN of the compressor's gain control.
The problem is I don't know how to do this. Is there support for direct MIDI connections like this in VSTs? Or do I need some sort of "virtual MIDI device" for the interconnects?
Your hunch here is probably correct; this task will be easier to accomplish by writing a virtual MIDI device instead of a VST plugin. It is possible to send MIDI events to a sequencer using the sendVstEventsToHost() call, but the problem is that the documentation never specifies how the host is required to react to these events. Many hosts simply ignore them, and I certainly can't think of one which allows easy routing from a plugin to a MIDI channel (maybe Plogue Bidule?).
You might be able to accomplish this with Audio Units using the kAudioUnitType_Generator plugin type... though I've never written such a plugin, my impression is that this is what you'd use to generate MIDI for the host. But again, the problem is that I'm not sure how the host would allow you to route audio into the plugin and accept MIDI from it.
At any rate, implementing your idea as a plugin will be the most difficult route if you want its behavior standardized across the most widely used sequencers. I think a far easier way to accomplish what you want is to create a virtual MIDI device, as you'd already thought of, and then use ReWire to route an input signal to your program.
Edit: Here are some resources on writing MIDI drivers for various systems:
Audio device driver programming in OS X
Windows MIDI driver API guide
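To illustrate the virtual-device route on macOS specifically: a virtual MIDI source can be created from user space with CoreMIDI, no driver required. A rough Swift sketch (the client and endpoint names are made up, and error handling is omitted):

```swift
import CoreMIDI

// Create a virtual source that other apps and hosts can pick as a MIDI input.
var client = MIDIClientRef()
var source = MIDIEndpointRef()
MIDIClientCreate("AnalyzerClient" as CFString, nil, nil, &client)
MIDISourceCreate(client, "Analyzer Out" as CFString, &source)

// Emit a note-on whenever one of the analyzer's triggers fires
// (e.g. volume > threshold).
func sendNoteOn(note: UInt8, velocity: UInt8) {
    var packet = MIDIPacket()
    packet.timeStamp = 0   // 0 means "now"
    packet.length = 3
    packet.data.0 = 0x90   // note-on, channel 1
    packet.data.1 = note
    packet.data.2 = velocity
    var packetList = MIDIPacketList(numPackets: 1, packet: packet)
    MIDIReceived(source, &packetList) // deliver through the virtual source
}
```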
VST plugins do not support direct MIDI connections; they can only have MIDI in/out ports.
It is still possible to do this, though: you just need a host that supports routing MIDI from one plugin to another. Modular hosts such as EnergyXT, Bidule, AudioMulch, and Console excel here; they all allow audio and MIDI signals to be routed freely (except for feedback paths). It may also be possible in hosts with more 'traditional' mixer-style VST racks. (For example, AFAIK Reaper will forward any MIDI from one plugin to the next.)
If you want to build your plugin in .NET, take a look at VST.NET.