What's the difference between module-remap-source and module-virtual-source in PulseAudio

If I run the following commands, I get a "virtual microphone" that's hooked up to a sink called "MicOutput". If I send data to "MicOutput", that data is then sent to the virtual microphone.
pactl load-module module-null-sink sink_name=MicOutput sink_properties=device.description="MicOutput"
pacmd load-module module-virtual-source source_name=VirtualMic master=MicOutput.monitor
I can get similar behavior if I replace the second line with:
pactl load-module module-remap-source source_name=Remap-Source master=MicOutput.monitor
The main difference I see is that the latency is lower with module-remap-source.
But what's the actual difference? When would I want to use one or the other?
My research so far
I see these two files:
https://fossies.org/linux/pulseaudio/src/modules/module-remap-source.c (added in 2013)
https://fossies.org/linux/pulseaudio/src/modules/module-virtual-source.c (added in 2010)
Perhaps if I looked at the code hard enough I'd understand the answer. I wonder if someone happens to know the answer though?

module-virtual-source is not typically used. It's an example of how a "filter source" should be implemented.
module-remap-source has much less overhead.
Source: I asked the PulseAudio team. https://lists.freedesktop.org/archives/pulseaudio-discuss/2022-April/032260.html
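If you end up scripting this setup, a small Python sketch along these lines (wrapping the same pactl calls with subprocess; the load_module helper is purely illustrative) can load the null sink and the remap source and unload them again afterwards, since pactl load-module prints the index of the module it just loaded:
import subprocess

def load_module(*args: str) -> str:
    # pactl load-module prints the index of the newly loaded module
    result = subprocess.run(["pactl", "load-module", *args],
                            check=True, capture_output=True, text=True)
    return result.stdout.strip()

sink_idx = load_module("module-null-sink", "sink_name=MicOutput",
                       "sink_properties=device.description=MicOutput")
source_idx = load_module("module-remap-source", "source_name=Remap-Source",
                         "master=MicOutput.monitor")

# ... feed audio into MicOutput and record from Remap-Source ...

for idx in (source_idx, sink_idx):
    subprocess.run(["pactl", "unload-module", idx], check=True)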

Related

music21: get the voice/program/instrument of midi voice from a flat score?

I have a simple script that uses music21 to process the notes in a midi file:
import music21
score = music21.converter.parse('www.vgmusic.com/music/console/nintendo/nes/zanac1a.mid')
for i in score.flat.notes:
    print(i.offset, i.quarterLength, i.pitch.midi)
Is there a way to also obtain a note's voicing / midi program using a flat score? Any pointers would be appreciated!
MIDI channels and programs are stored on Instrument instances, so use getContextByClass(instrument.Instrument) to find the closest Instrument in the stream, and then access its .midiProgram.
Be careful:
.midiChannel and .midiProgram are 0-indexed, so MIDI channel 10 will be 9 in music21, etc. (we're discussing changing this behavior in the next release)
Some information might be missing if you're not running the bleeding edge version (we merged a patch yesterday on this topic), so I advise pulling from git: pip install git+https://github.com/cuthbertLab/music21
.flat is going to kill you, though, if the file is multitrack. If you follow my advice you'll just get the last instrument on every track. 90% of the time people doing .flat actually want .recurse().
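Putting that together, a minimal sketch (assuming a local copy of the MIDI file, and using .pitches so chords don't break the loop) might look like this:
import music21
from music21 import instrument

score = music21.converter.parse('zanac1a.mid')  # assuming a local copy of the file
for n in score.recurse().notes:
    # closest Instrument in the surrounding context (may be None)
    inst = n.getContextByClass(instrument.Instrument)
    program = inst.midiProgram if inst is not None else None
    # note: with .recurse(), n.offset is relative to the containing measure,
    # not to the start of the piece as it is with .flat
    print(n.offset, n.quarterLength, [p.midi for p in n.pitches], program)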

Syncing of buffer-transmission with ESP32, I2S MEMS-mic and SD-card (FreeRTOS, PlatformIO, ESP-PROG)

I know this forum dislikes "open" questions like this; nevertheless, I'd like somebody to help untie the knot in my head. Much appreciated.
The goal is simple:
read a stereo 32-bit 44,100 S/s I2S signal from two Adafruit SPH0645 mics
create a WAV header and store the data on an SD card
I've been at this for a few days now and I know that this will be much more complicated than I originally thought. The main reason: signal quality. Like most tutorials on this subject, the simplest "hello world" for these mics is a loop polling for I2S samples: poll, fill a buffer, output via serial or write to the SD card. This returns a choppy, noisy, sped-up version of the real-life audio. The filling of the internal DMA buffers can be seen as constant, but the rest is mostly chaos, so:
how do I sync these DMA buffers with the rest of my code?
From experience with the STM32 HAL, I'd imagine some register which can be set to throw an interrupt whenever a buffer is full, or an event which can be sent between tasks via queues. Examples on this subject either poll in a main loop in mono at an abysmal sample rate and bit depth, or use pages of overkill code and never address what it does ("just copy and it works" is not good). Does the ESP32 Arduino framework provide some way to do this properly? The Espressif documentation isn't something to look forward to, since some of their I2S interface functions don't even work (if you are researching this topic as well, you too might have noticed that i2s_read only returns zeros). Just a hint in the right direction would help; I'm writing my own code anyway. Interrupts? Events? Timers? Polling for full buffers? Only you might know.
Have a good one, thanks.
Thanks to https://github.com/atomic14/ I now have an answer for a syncing method which works very well. This method has been tried by https://esp32.com/viewtopic.php?t=12546 who also didn't fully understand what was going on: the Espressif I2S interface offers a flag stored in an event which is triggered every time one of the specified DMA buffers has received a full set of data, ergo, is full. It looks like this:
while (<your condition>) {
    i2s_event_t evt;
    // block until the driver reports an I2S event
    if (xQueueReceive(<your queue>, &evt, portMAX_DELAY) == pdPASS) {
        if (evt.type == I2S_EVENT_RX_DONE) {
            size_t bytesRead = 0;
            do {
                // read data via i2s_read or i2s_read_bytes, letting it update
                // bytesRead; stop once a read returns no more bytes
            } while (bytesRead > 0);
        }
    }
}
No audio data is stored in this queue, but rather an event flag which can then be used to synchronize the DMA filling with further buffering/calculating/sending of the read data.
However, this only works if you install the I2S driver with a specific setup. Instead of using
i2s_driver_install(I2S_NUM_0, &i2s_config, 0, NULL);
in your setup, you can activate the "affinity" for events by passing a queue handle and a length:
i2s_driver_install(I2S_NUM_0, &i2s_config, 4, &<your queue>);
Hope this helps getting started; it sure did help me.

Filtering an audio signal and then reading the meter without sending it to master

I'm trying to filter a signal and then analyse the values of the filtered signal using Tone.js / Web-Audio API.
I'm expecting to get values of the filtered signal, but I only get -Infinity, meaning that my connections between the nodes are wrong. I've made a small fiddle demonstrating this; however, in my use case I do not want to send this node to the destination of the context. I only want to analyse it, not hear it.
osc.connect(filter)
filter.connect(gainNode)
gainNode.connect(meter)
console.log(meter.getLevel())
I guess you tested the code in Chrome, because there is a problem with Chrome which causes it not to process anything until it is connected to the destination. When using Tone.js, that means you need to call .toMaster() at the end of your chain. I updated your fiddle to make it work: https://jsfiddle.net/8f7abzoL/.
In Firefox calling .toMaster() is not necessary therefore the following works in Firefox as well: https://jsfiddle.net/yrjgfdtz/.
After some digging I've found out that I need to have a ScriptProcessorNode (which is apparently no longer recommended), so I'm looking into Audio Worklet Nodes.

iRobot Create not returning sensor data

I am trying to stream sensor data from the iRobot Create. I get tuple out of range errors when I try
bot.stream_sensors(somenumber) and bot.poll_sensors(somenumbers). Whenever I input bot.sensors, I just get an empty array {}. I have even tried sending bot.sensors while pushing in on the bump sensor, still getting an empty array. I am connected to the bot through the Serial port with a serial-to-usb converter on my side. The only code before trying to get the sensor data is
import openinterface
bot = openinterface.CreateBot(com_port="/dev/ttyUSB0", mode="full")
Does anyone have an idea of how to solve this issue? Everywhere else just uses stream_sensors(6) and it seems to work fine.
P.S. I posted a question similar to this topic not too long ago, but no one responded. I'm not trying to spam, but now I have a clearer question and a better idea of what the apparent problem is, so I thought I would try again.
I downloaded openinterface.py from a site that also included some sample programs. I'd suggest you take a step back, try the sample code, try to find other, more sophisticated sample code and play with that first before moving on to your real code. You may be missing a step somewhere.
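Following that advice, a minimal smoke test (using only the calls that already appear in the question, since openinterface.py's exact API isn't documented here) might look like:
import time
import openinterface

bot = openinterface.CreateBot(com_port="/dev/ttyUSB0", mode="full")
bot.stream_sensors(6)   # packet group 6, as used in the other examples mentioned
time.sleep(1)           # give the Create a moment to start streaming
print(bot.sensors)      # should no longer be empty if streaming is working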
I may be a bit late to answer this, but for reference purposes: directly controlling the iRobot is greatly simplified by using Pyrobot.

Getting the IO count

I am using the Xen hypervisor. I am trying to get the IO count of the VMs running on top of Xen. Can someone suggest a way or tool to get the IO count? I tried using xenmon and virt-top. virt-top doesn't give any value and xenmon always shows 0. Any suggestions for getting the number of read or write calls made by a VM, or the read and write (block IO) bandwidth of a particular VM? Thanks!
Regards,
Sethu
You can read this directly from sysfs on most systems. You want to open the following directory:
/sys/devices/xen-backend
And look for directories starting with vbd-
The nomenclature is:
vbd-{domain_id}-{vbd_id}/statistics
Inside, you'll find what you need, which is:
br_req - Number of block read requests
oo_req - Number of 'out of' requests (no room left in list to service any given request)
rd_req - Number of read requests
rd_sect - Number of sectors read
wr_sect - Number of sectors written
The br_req will be an aggregate count of things like write barriers, aborts, etc.
Note: for this to work, the kernel has to be told to export Xen attributes via sysfs, but most Xen packages have this enabled. Additionally, the location in sysfs might be different with earlier versions of Xen.
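For example, here is a minimal Python sketch (assuming the sysfs layout described above; the base path may differ on older Xen versions) that dumps those counters for every backend VBD:
import glob
import os

SYSFS_BASE = "/sys/devices/xen-backend"  # may live elsewhere on older Xen versions

for stats_dir in glob.glob(os.path.join(SYSFS_BASE, "vbd-*", "statistics")):
    vbd = os.path.basename(os.path.dirname(stats_dir))  # e.g. vbd-{domain_id}-{vbd_id}
    print(vbd)
    for name in ("br_req", "oo_req", "rd_req", "rd_sect", "wr_sect"):
        path = os.path.join(stats_dir, name)
        if os.path.exists(path):
            with open(path) as f:
                print(" ", name, f.read().strip())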
Have you tried xentop?
There is also bwm-ng (check your distro). It shows block utilization per disk (real/virtual). If you know the name of the virtual disk attached to the VM, then you can use bwm-ng to get those stats.