My electronic keyboard periodically sends MIDI real-time clock messages, which I would like to use as a metronome in a program of mine that sends MIDI events to the keyboard (the purpose of this program is auto-accompaniment based on a score). I get 6 such messages per quarter note. The problem is that I couldn't find a way to set the keyboard's tempo (in BPM) programmatically, that is, by sending a set-tempo MIDI message from my program to the keyboard. That kind of message is only supported in MIDI files and apparently cannot be sent on the wire. How can I change the clock frequency without this feature? Changing it manually on the keyboard is impractical.
PS: I'm on Linux and am using ALSA's blocking snd_rawmidi_read to read bytes from the keyboard in a loop, in order to synchronize my program.
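For context, here is a stripped-down sketch of the kind of read loop I mean, deriving the keyboard's current BPM from the incoming 0xF8 clock bytes. The port name "hw:1,0,0" is just an example, and the divisor assumes the spec's 24 clocks per quarter note (my keyboard seems to send 6, so adjust the #define accordingly):

```c
/* Sketch: derive the keyboard's BPM from incoming MIDI clock bytes.
 * Compile with: gcc -o clockbpm clockbpm.c -lasound
 * "hw:1,0,0" is an example port name; 24 PPQN is the MIDI spec value
 * (substitute 6 if your keyboard really sends 6 per quarter note). */
#include <alsa/asoundlib.h>
#include <stdio.h>
#include <time.h>

#define CLOCKS_PER_QUARTER 24  /* spec value; adjust to what you observe */

int main(void)
{
    snd_rawmidi_t *in = NULL;
    if (snd_rawmidi_open(&in, NULL, "hw:1,0,0", 0) < 0) {
        fprintf(stderr, "cannot open MIDI input\n");
        return 1;
    }

    struct timespec last = {0};
    int clocks = 0;
    unsigned char byte;

    while (snd_rawmidi_read(in, &byte, 1) == 1) {    /* blocking read */
        if (byte != 0xF8)                            /* real-time clock */
            continue;
        if (++clocks % CLOCKS_PER_QUARTER != 0)
            continue;
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        if (last.tv_sec != 0) {
            double quarter = (now.tv_sec - last.tv_sec)
                           + (now.tv_nsec - last.tv_nsec) / 1e9;
            printf("tempo: %.1f BPM\n", 60.0 / quarter);
        }
        last = now;
    }
    snd_rawmidi_close(in);
    return 0;
}
```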
Our application uses the Twilio Voice SDKs for iOS, Android, and Web. Our use case relies on precise device synchronization and timestamping. We are playing an audio stream on multiple adjacent devices (in a Twilio conference call), and we need that audio playback to be in sync. Most of the time it works great, but every now and then one of the devices falls a little behind and throws off the whole experience. We want to detect when a device is falling behind (receiving packets late) so we can temporarily mute it and keep it from spoiling the user experience we are going for.
We believe that Twilio Voice uses WebRTC and the Real-time Transport Protocol (RTP) under the hood. We also believe RTP carries timestamp information for when packets are sent and received.
We are looking for any suggestions for how we might read this timestamp information (both sent & received) to detect device synchronization issues.
Our iOS and Android clients are built with Flutter and Dart, so any way to look at this packet information from Dart would be great. If not, we can drop down to native code through Swift and Kotlin. For the web, we would need a way to look at this timestamp data using JavaScript.
If possible, we'd like to access this information through the SDK, but I don't see anything about timestamps in Twilio's Voice documentation. If it's not accessible, we might have to sniff packets on the devices and look at the RTP packets coming from Twilio to see what information is available. As long as this does not break Twilio's terms of service, of course :)
Even if you could get this information, I don't think it will be useful. The timestamp field in RTP has little to do with real time; in voice it's actually a sample offset into the audio stream. With a typical narrowband codec with a fixed bit rate and no silence suppression, it's completely predictable from the RTP sequence number. For example, with 20 ms packets of G.711 it will increment by exactly 160 each packet (8000 samples/s × 0.02 s).
RTP receivers expect random variation between the receipt time of a packet and its timestamp, known as jitter, which is introduced by delays at the sender, in the network, and at the receiver. This is why receivers use jitter buffers to reduce the likelihood of buffer underrun during playout. The definition of jitter for RTCP, the interarrival jitter, is a calculation that measures exactly this: the variation between the (predictable) RTP timestamp and the measured wallclock arrival time at the receiver.
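For reference, that RTCP interarrival jitter is simple to compute yourself if you ever do get at the packets. A sketch of the RFC 3550, section 6.4.1 estimator (both arguments must be in RTP timestamp units, e.g. 1/8000 s for G.711, so convert wallclock arrival times first):

```c
/* Interarrival jitter estimator per RFC 3550, section 6.4.1.
 * arrival and rtp_ts must be in the same units (RTP timestamp
 * units), so convert wallclock arrival times before calling. */
#include <stdlib.h>

static double jitter = 0.0;
static long prev_transit = 0;
static int have_prev = 0;

void update_jitter(long arrival, long rtp_ts)
{
    long transit = arrival - rtp_ts;      /* relative transit time */
    if (have_prev) {
        long d = labs(transit - prev_transit);
        jitter += (d - jitter) / 16.0;    /* running estimate, gain 1/16 */
    }
    prev_transit = transit;
    have_prev = 1;
}
```

On the web side, WebRTC already exposes the result: RTCPeerConnection.getStats() reports a jitter field on its inbound-rtp stats, which may be the easiest hook for your JavaScript client.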
Maybe you need something more like an NTP-style protocol between your clients and your server.
I want to collect activity times from a machine that has no interface for recording its running actions over a wire. But I could put a PC with a microphone close to that machine and listen: when a certain noise level is reached, that indicates activity, and I would record timestamps from when the noise starts until it disappears.
Does anyone have an idea whether this is possible in PowerShell, and how to do it?
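In case it helps anyone answer: the detection logic I have in mind is simple whatever the language. Here it is sketched in C (the audio capture side and the 0.10 threshold are placeholders); I'm hoping PowerShell can express the equivalent:

```c
/* Activity logger sketch: watch a stream of short-term loudness
 * values and log timestamps when the machine starts and stops
 * making noise. THRESHOLD is an arbitrary example; real code would
 * compute RMS levels from microphone capture and call on_level(). */
#include <stdio.h>
#include <time.h>

#define THRESHOLD 0.10

static void stamp(const char *what)
{
    time_t t = time(NULL);
    char buf[32];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", localtime(&t));
    printf("%s activity %s\n", buf, what);
}

void on_level(double level)   /* call once per captured audio block */
{
    static int active = 0;
    if (!active && level >= THRESHOLD) { active = 1; stamp("start"); }
    else if (active && level < THRESHOLD) { active = 0; stamp("stop"); }
}
```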
I've got a guitar amp with a MIDI interface. I'm planning to explore what the device can do beyond what the manufacturer built in. Since I have no experience with MIDI, I'd like to know if it's possible to ruin a MIDI device by sending wrong data.
I'm not sure yet what data I'd like to send, and the device is basically a black box without documentation, so I can't give many more details. But one thing I'd like to attempt is overwriting the built-in effects.
MIDI commands are parsed and executed by the device's firmware.
Whatever effect(s) a command has is determined by what the firmware is programmed to do when it receives that command.
Typically, unknown commands are ignored, so it should not be possible to ruin a device by sending random data.
Most devices do not have any permanent state.
However, some devices allow upgrading their firmware through MIDI, so if you use the correct SysEx command, and manage to get any checksums correct, it would be possible to replace the original firmware with your own code (or some non-code that prevents it from working).
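If you just want to probe the black box safely before going anywhere near SysEx dumps, one harmless starting point is the Universal Identity Request (F0 7E 7F 06 01 F7): devices that support it reply with their manufacturer and model IDs, and devices that don't simply ignore it. A sketch using ALSA's rawmidi interface (the port name is an assumption; on other platforms any MIDI library can send the same six bytes):

```c
/* Send a Universal Identity Request and dump the reply (if any).
 * F0 7E 7F 06 01 F7 is a standard, harmless SysEx that devices
 * either answer or silently ignore. "hw:1,0,0" is an example port. */
#include <alsa/asoundlib.h>
#include <stdio.h>

int main(void)
{
    static const unsigned char ident_req[] =
        { 0xF0, 0x7E, 0x7F, 0x06, 0x01, 0xF7 };
    snd_rawmidi_t *in = NULL, *out = NULL;

    if (snd_rawmidi_open(&in, &out, "hw:1,0,0", 0) < 0) {
        fprintf(stderr, "cannot open MIDI port\n");
        return 1;
    }
    snd_rawmidi_write(out, ident_req, sizeof ident_req);
    snd_rawmidi_drain(out);

    unsigned char byte;
    while (snd_rawmidi_read(in, &byte, 1) == 1) {   /* blocks forever */
        printf("%02X ", byte);                      /* if device is mute */
        if (byte == 0xF7) { putchar('\n'); break; } /* end of SysEx */
    }
    snd_rawmidi_close(in);
    snd_rawmidi_close(out);
    return 0;
}
```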
I am developing an iOS application that receives data through the auxiliary port (microphone).
We have oscilloscopes hooked up and are at the point where we can measure frequencies and amplitudes on a test iPhone.
However, even with the auxiliary cable connected, the iPhone still listens to the internal microphone in addition to our external AUX input, thus watering down our measurements.
The iPhone definitely recognizes the connected AUX cable (internal speakers are turned off).
Is there any way to programmatically disable the built-in microphone?
or
Is there some special signal we can send through the AUX port to disable the internal microphone?
After much research on this topic: there is no way to do it at the moment.
If you look at the Audio Session Programming Guide and the AVCaptureDevice Class Reference, all the properties relating to the device's input sources and audio routes are read-only.
If it's of any use, you can detect whether or not headphones or an external mic are plugged in. Here's a question relating to that.
I don't believe you can disable the built-in microphone without the user physically flipping the silent switch, but maybe you could capture what the built-in mic records and then filter its contribution out of your measurements? I don't know how you would go about implementing this; it's just a theory.
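To make the "filter it out" theory slightly more concrete: if you could get at the built-in mic's signal as a reference channel, a classic least-mean-squares (LMS) canceller subtracts whatever part of the measurement correlates with that reference. Purely a sketch; the capture side, tap count, and step size are untuned assumptions:

```c
/* LMS noise canceller sketch: remove from `primary` whatever is
 * linearly predictable from `reference` (e.g. the built-in mic).
 * TAPS and MU are arbitrary starting values, not tuned. */
#include <stddef.h>

#define TAPS 32
#define MU   0.001f

static float w[TAPS];     /* adaptive filter weights */
static float x[TAPS];     /* recent reference samples */

float lms_cancel(float primary, float reference)
{
    /* shift reference history and insert the new sample */
    for (size_t i = TAPS - 1; i > 0; i--)
        x[i] = x[i - 1];
    x[0] = reference;

    /* predict the interference, then subtract it */
    float y = 0.0f;
    for (size_t i = 0; i < TAPS; i++)
        y += w[i] * x[i];
    float e = primary - y;          /* cleaned output sample */

    /* adapt the weights toward a better prediction */
    for (size_t i = 0; i < TAPS; i++)
        w[i] += MU * e * x[i];
    return e;
}
```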
Hope this helps!
I need to receive data periodically through a Bluetooth External Accessory.
I implemented an event-driven model around the External Accessory's streams. However, the initial transmission from Bluetooth is always delayed. For example, with packets 15 bytes long, the stream delegate would not fire until about 150 bytes had accumulated.
Will polling help?
EDIT:
Also, I found it hard to recover the session after the app switches back from background to foreground. Trying to open the session again fails. Any ideas?
Read every byte when NSStreamEventHasBytesAvailable arrives.
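In other words, treat the event as "some bytes are ready", not "one packet arrived", and drain the stream in a loop each time it fires. The same pattern in plain C terms, with a non-blocking POSIX fd standing in for the EA input stream (in your delegate you would loop on -[NSInputStream read:maxLength:] while hasBytesAvailable stays YES):

```c
/* Drain-on-readable pattern: when the "bytes available" event fires,
 * keep reading until the source is empty, then wait for the next
 * event. A non-blocking POSIX fd stands in for the EA stream here. */
#include <stdint.h>
#include <unistd.h>

void drain(int fd)
{
    uint8_t buf[512];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        /* hand n bytes to the packet parser; packets may arrive
         * split or coalesced, so parse by length, not by read() call */
    }
    /* n == 0: stream closed; n < 0 with EAGAIN: drained for now */
}
```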
Did you develop your own Bluetooth accessory? Maybe the MCU only flushes its buffer after every 150 bytes.
Also, you mentioned the initial transmission: once the Bluetooth device is paired and connected to the iPhone, it has to go through an identification process, handshaking a secret certificate. This can take a few seconds, sometimes even 10, depending on signal quality. This may be the cause of the delay.