Virtual MIDI and VSTs

I would like to make a simple VST plugin that does the following:
- analyze an audio stream (volume, beat, etc.)
- has triggers on the analyzer's output (e.g. do something when volume > threshold)
- generate MIDI events based on the triggers
This is to be able to chain plugins, even if they are not designed for it. For example, I could control the gain of a compressor with the envelope of an audio stream, simply by connecting the MIDI OUT of my plugin to the MIDI IN of the compressor's gain control.
The problem is I don't know how to do this. Is there support for direct MIDI connections like this in VSTs? Or maybe I need some sort of "virtual MIDI device" for interconnects?

Your hunch here is probably correct; this task will be easier to accomplish by writing a virtual MIDI device instead of a VST plugin. It is possible to send MIDI events to a sequencer using the sendVstEventsToHost() call, but the problem is that the documentation never specifies how the host is required to react to these events. Many hosts simply ignore them, and I certainly can't think of one which allows easy routing from a plugin to a MIDI channel (maybe Plogue Bidule?).
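For completeness, the sending side looks roughly like this with the VST 2.4 SDK. This is a minimal sketch only: MyAnalyzerPlugin is a hypothetical AudioEffectX subclass, and the threshold and note values are placeholders.

    // Minimal sketch against the legacy VST 2.4 SDK. MyAnalyzerPlugin is a
    // hypothetical subclass of AudioEffectX; the 0.5 threshold, note 60 and
    // velocity 100 are arbitrary placeholders.
    #include "audioeffectx.h"
    #include <algorithm>
    #include <cmath>
    #include <cstring>

    void MyAnalyzerPlugin::processReplacing(float** inputs, float** outputs,
                                            VstInt32 sampleFrames)
    {
        // Pass the audio through while measuring the block's peak level.
        float peak = 0.0f;
        for (VstInt32 i = 0; i < sampleFrames; ++i) {
            outputs[0][i] = inputs[0][i];
            peak = std::max(peak, std::fabs(inputs[0][i]));
        }

        if (peak > 0.5f) {  // trigger: volume above threshold
            VstMidiEvent midi;
            memset(&midi, 0, sizeof(midi));
            midi.type        = kVstMidiType;
            midi.byteSize    = sizeof(VstMidiEvent);
            midi.deltaFrames = 0;           // fire at the start of this block
            midi.midiData[0] = (char)0x90;  // note-on, channel 1
            midi.midiData[1] = 60;          // middle C
            midi.midiData[2] = 100;         // velocity

            VstEvents events;
            memset(&events, 0, sizeof(events));
            events.numEvents = 1;
            events.events[0] = (VstEvent*)&midi;

            // What the host does with this (if anything) is up to the host.
            sendVstEventsToHost(&events);
        }
    }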
You might be able to accomplish this with Audio Units using the kAudioUnitType_Generator plugin type. Though I've never written such a plugin, my impression is that this is what you'd use to generate MIDI to the host. But again, the problem here is that I'm not sure how the host would let you route audio to the plugin and accept MIDI from it.
At any rate, your idea will be the most difficult to implement as a plugin if you want to standardize its behavior across the most widely used sequencers. I think a far easier way to accomplish what you want is to create a virtual MIDI device, as you'd thought of already, and then use ReWire to route an input signal to your program.
Edit: Here are some resources on writing MIDI drivers for various systems:
Audio device driver programming in OS X
Windows MIDI driver API guide

VST plugins do not support direct MIDI connections; they can only have MIDI in/out ports.
It is still possible to do it, though: you just need a host that supports routing MIDI from one plugin to another. Modular hosts such as EnergyXT, Bidule, AudioMulch and Console excel here. They all allow audio and MIDI signals to be routed freely (except for feedback paths). It may also be possible in hosts with more 'traditional' mixer-style VST racks. (For example, AFAIK Reaper will forward any MIDI from one plugin to the next.)

If you want to build your plugin in .NET, take a look at VST.NET.

Related

Address external hardware directly without a driver?

Is it possible to access external hardware without using a driver, i.e. without having the driver abstraction layer between the program and the external device?
Can you use a device by implementing your own driver-like controlling/handling directly in your program code?
I'm trying to understand a program that implements a Modbus protocol and some very specific Modbus configurations, and I don't know how exactly it communicates with the Modbus devices.
It looks to me like this is very similar to what a driver does.
But can it even communicate DIRECTLY with the device without having a driver installed?
Yes, there are several micro-kernel OSes that are always configured this way: drivers are implemented entirely outside of the kernel.
The first thing you likely need is access to the device's registers; this is typically done with mmap(), and you may need to dig around a bit to find the right settings for cacheability, etc.
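On Linux, for example, a user-space mapping might look like the sketch below; the base address and register offsets are made-up placeholders, and the real ones come from your device's datasheet or device tree.

    // Sketch of mapping a device's registers from user space on Linux.
    // REG_BASE and the register offsets are made-up placeholders; the
    // real values come from the device's datasheet / device tree.
    #include <cstdint>
    #include <cstdio>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main()
    {
        const off_t  REG_BASE = 0x40000000;  // hypothetical physical base
        const size_t REG_SIZE = 0x1000;      // one page of registers

        // O_SYNC asks for an uncached mapping on most platforms.
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        void* mem = mmap(nullptr, REG_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, REG_BASE);
        if (mem == MAP_FAILED) { perror("mmap"); return 1; }

        // volatile: every access must really reach the hardware.
        volatile uint32_t* regs = static_cast<volatile uint32_t*>(mem);
        uint32_t status = regs[0];  // read the register at offset 0x0
        regs[1] = 0x1;              // write the register at offset 0x4
        printf("status = 0x%08x\n", status);

        munmap(mem, REG_SIZE);
        close(fd);
        return 0;
    }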
Second problem is interrupts. Unless you are running something like QNX, you won't have a way to have interrupts signal your program directly. You will probably have to turn them off and poll the device periodically.
If you are using Linux and need I/O ports (inb, outb, etc.), see man ioperm for more information.
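A minimal sketch of that approach (0x378, the traditional LPT1 base address, is used purely as an example; this needs root):

    // Sketch of raw x86 port I/O from user space on Linux (run as root).
    // 0x378 is the classic LPT1 base address, used here only as an example.
    #include <cstdio>
    #include <sys/io.h>   // ioperm/inb/outb (x86 Linux, glibc)

    int main()
    {
        const unsigned short PORT = 0x378;

        // Ask the kernel for access to 3 consecutive ports from PORT.
        if (ioperm(PORT, 3, 1) != 0) { perror("ioperm"); return 1; }

        outb(0xFF, PORT);                  // write a byte to the data port
        unsigned char v = inb(PORT + 1);   // read the status port
        printf("status = 0x%02x\n", v);

        ioperm(PORT, 3, 0);                // drop the access again
        return 0;
    }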

Design a generic IR remote interface

I'm wondering if it is possible to design an interface that can auto-determine which lirc.conf to use.
Currently, I'm using a Raspberry Pi to control my TV by using the specific lirc.conf for my TV remote.
Now if I put the device in a different area with a different TV, how could I determine which conf to use?
Let's say we know some basic information like the brand of the TV; is there any potential to create a combined lirc config that would work for all models of that brand?
Well, if you just have to handle a few remotes, you can use all of the lirc.conf files involved. lirc will try to decode the remote using each lirc.conf sequentially, using the first match.
The details depend heavily on your lirc version. For modern lirc, see http://lirc.org/html/configuration-guide.html#appendix-8.
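A combined config can be as simple as including one file per remote; lircd then tries each in turn. A sketch (the paths and file names here are hypothetical):

    # Hypothetical combined lircd.conf: lircd loads every remote listed
    # below and uses the first one whose codes match the incoming signal.
    include "/etc/lirc/lircd.conf.d/samsung_tv.conf"
    include "/etc/lirc/lircd.conf.d/lg_tv.conf"
    include "/etc/lirc/lircd.conf.d/sony_tv.conf"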
Another option is to use the kernel decoding. If your remotes are supported by the kernel, you could just use the devinput driver and its lircd.conf file, trusting the kernel decoding, which determines the remotes at runtime.

How to get rid of the intermediate server and manage a direct connection from the IP-PBX to the streaming software?

I found this article:
www.codeproject.com/Articles/1077937/Possible-ways-to-organize-interaction-between-co,
and I know that there exists code for the Flash player.
Can I use only the code for managing connections (as in the article's examples) plus the free Flash player code, and thereby get rid of the integration software?
You need to be more specific, but in reality the idea of the integration software is the following:
- Session management
- Multi-codec/format support
- Interface Resource
- Scheduling
Normally an IP PBX supports SIP only, hence you need transcoding between the SIP world (audio + video) and the webcast world (web browser/client/camera). Integration software of the kind described does a pretty good job, and some of it is open source (Wowza). If you want to replace it, I would do so with an MCU which supports RTMP/Flash; take a look at the McuWeb project. Otherwise you need to write SIP client code as well to integrate with the SIP world.

Live Streaming Service Development

I am about to develop a service that involves interactive audio live streaming. Interactive in the sense that a moderator can have his stream paused and, upon request, stream audio coming from one of his listeners (during the streaming session).
It's more like a large pipe where the water flowing through can come in from only one of many small pipes connected to it at a time, with a moderator assigned to each stream controlling which pipe is opened. I know nothing about media streaming, and I don't know if a cloud service provides an interactive programmable solution such as this.
I am a programmer and I will be able to program the logic involved in such interaction. The issue is that I am a novice at media streaming and don't have any knowledge of its technologies or the various software used on the server for such purposes. Are there any books that can introduce me to the technologies employed in media streaming? I am trying to avoid using Flash.
Clients could be web or mobile. I don't think I will have any problem with integrating with the client system. My issue is implementing the server side.
You are effectively programming a switcher. Basically, you need to be able to switch from one audio stream to the other. With uncompressed PCM, this is very simple. As long as the sample rates and bit depth are equal, cut the audio on any frame (which is sample-accurate) and switch to the other. You can resample audio and apply dithering to convert between different sample rates and bit depths.
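A hard, sample-accurate cut between two decoded PCM buffers is then just an index comparison; a minimal sketch (all names are illustrative):

    // Sketch: sample-accurate switch between two decoded PCM streams that
    // share a sample rate and bit depth. All names are illustrative.
    #include <cstddef>
    #include <cstdint>

    void switchStreams(const int16_t* a, const int16_t* b, int16_t* out,
                       size_t totalSamples, size_t switchSample)
    {
        for (size_t i = 0; i < totalSamples; ++i)
            out[i] = (i < switchSample) ? a[i] : b[i];  // hard cut at the frame
        // A short crossfade around switchSample can mask any click if the
        // two streams' levels differ at the cut point.
    }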
The complicated part is when lossy codecs get involved. On a similar project, I went down the road of trying to stitch streams together, and I can tell you that it is nearly impossible, even with something as simple as MP3. (The bit reservoir makes things difficult.) Plus, it sounds as if you will be supporting a wide variety of devices, meaning you likely won't be able to standardize on a codec anyway. The best thing to do is take the incoming streams and decode them at the mix point of your system. Then you can switch from stream to stream easily with PCM.
At the output of your system, you'll want to re-encode to some lossy codec.
Due to latency, you don't typically want the server doing this switching. The switching should be done at the desk of the person encoding the stream, so that they can cue it accurately. Just write something that does all of the switching and encoding, and use SHOUTcast/Icecast to host your streams.

Interface between a DSP/Microcontroller and a PC application

I'm using a DSP to control a sensorless brushless DC motor. The DSP is on a board which has a parallel port and a JTAG connection (it's an eZdspTMS320F2812). What would be the best way to communicate between a PC application and the DSP while it is running? Ideally I'd like to have a GUI program with buttons like start, stop, accelerate, decelerate... but I've never done anything like that before. Which ports and method would be easiest to use?
Thanks
You can also use simple RS232 communications. I always use it because it's cheap and easy to implement.
RS232 transceivers are very cheap (like the MAX232 from Maxim-IC) and easy to use. They also come in many packages, DIP or SOIC for example, and can be found in almost every electronics shop.
You can use any USART on your microcontroller to link with the MAX232. Then, using a serial-USB converter on the PC (or, if your PC has a serial port, it's even easier), you can use serial port programming from any programming language to develop your desktop application.
After that, all you have to do is create a protocol to exchange data between your PC program and your DSP (some simple commands to start, stop and change motor direction, for example).
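The PC side of such a protocol can be just a few lines using POSIX termios; a sketch, where the device path, baud rate and command bytes are assumptions for illustration:

    // Sketch of the PC side on Linux/macOS using POSIX termios. The
    // device path, baud rate and 'R'/'S' command bytes are assumptions.
    #include <cstdio>
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int main()
    {
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        termios tio {};
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);              // raw bytes: no echo, no line editing
        cfsetispeed(&tio, B115200);   // must match the DSP's USART setup
        cfsetospeed(&tio, B115200);
        tcsetattr(fd, TCSANOW, &tio);

        write(fd, "R", 1);            // hypothetical command: start motor
        sleep(5);
        write(fd, "S", 1);            // hypothetical command: stop motor

        close(fd);
        return 0;
    }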
Good luck in your project.
The parallel port is probably the easiest route. Depending on what OS and programming language you are using, you should be able to find example code or libraries to support bi-directional communication via the parallel port. Since you have a small set of commands that you might want to send to the DSP board, you can probably just send a single character to the board for each command, e.g. 'R' = start, 'S' = stop, etc.
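On Linux, for instance, pushing those single-character commands out of LPT1's data register takes only a few lines; a sketch (0x378 is the traditional LPT1 base address, and the command letters are just examples):

    // Sketch: single-character commands to the DSP board over LPT1's data
    // register on x86 Linux (run as root). 0x378 and the command letters
    // are just examples.
    #include <cstdio>
    #include <sys/io.h>
    #include <unistd.h>

    int main()
    {
        const unsigned short LPT1 = 0x378;
        if (ioperm(LPT1, 1, 1) != 0) { perror("ioperm"); return 1; }

        outb('R', LPT1);   // 'R' = start
        sleep(5);
        outb('S', LPT1);   // 'S' = stop

        ioperm(LPT1, 1, 0);
        return 0;
    }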