Calibration of a soundcard in MATLAB

I have designed a GUI in MATLAB to calibrate my sound card, and I am able to record my input signal. I would now like to calibrate that input.
How do I do that?
My GUI should be able to adapt to different sound cards and report dBV values, hence the calibration is required. Any help would be appreciated.

A: This is a metrology task rather than a programming one.
To get the job done, you need a fully controlled environment in which you can re-run a defined-input/known-output experiment.
In principle, all of your devices and your whole setup have to be controlled, i.e.:
your MIC input (the acoustic-to-electric converter); while its [dBA] -> [V] conversion is "readable" further down the cable path, it is not a principally important value per se,
your cable/wire path, which should be neither neglected nor forgotten,
your sound-card A/D converter,
your pre-calibration audio sound sample,
your pre-calibration test environment,
so that you can pre-calibrate your devices for measurements.
The calibration itself is achieved by playing the same audio sound sample in the same test environment and having it measured by another device that has been certified by a locally recognised reference authority to a certain precision class (a guarantee that its readings will not deviate from the correct/exact values by more than that nationally/internationally recognised class envelope).
Note: you may want to pre-calibrate your MIC + sound-card A/D setup inside your controlled (in-vitro) environment across a wide range of frequencies, so as to capture any frequency-dependent variation of the measurement-conversion path. Your pre-calibration then yields a calibration curve that serves as an input for the further tests you perform in-vivo.
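For the MATLAB side of such a pre-calibration, here is a minimal sketch, assuming you feed the input with a reference tone whose RMS voltage you know (e.g. from a signal generator checked against a certified meter); the sample rate, capture length and Vref_rms below are illustrative only, and a real calibration would repeat this per frequency to build the calibration curve:
% Record the known reference tone (assumed: a steady tone of known RMS voltage)
fs = 48000;                       % sampling rate in Hz (illustrative)
Vref_rms = 1.0;                   % RMS voltage of the reference signal, measured externally
rec = audiorecorder(fs, 16, 1);   % 16-bit, mono
recordblocking(rec, 5);           % capture 5 seconds of the reference tone
x = getaudiodata(rec);            % normalised full-scale samples in [-1, 1]

% Calibration factor: volts per full-scale unit
x_rms = sqrt(mean(x.^2));
cal = Vref_rms / x_rms;           % [V per full-scale unit]

% Later, any recording y (in full-scale units) can be expressed in dBV
y = x;                            % placeholder: reuse the reference capture as an example
y_rms_volts = cal * sqrt(mean(y.^2));
dBV = 20 * log10(y_rms_volts);    % dB relative to 1 Vrms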

Related

Switch between two flanges

I am currently working with multibody mechanical systems using the MultiBody library included in the standard Modelica distribution.
I need to implement a switch between flanges, in order to select position or force control for a given joint.
model FlangeSwitch "Switch between flanges"
  Modelica.Mechanics.Translational.Interfaces.Flange_a flange_a_1;
  Modelica.Mechanics.Translational.Interfaces.Flange_b flange_b_1;
  Modelica.Mechanics.Translational.Interfaces.Flange_a flange_a_2;
  Modelica.Mechanics.Translational.Interfaces.Flange_b flange_b_2;
  Modelica.Mechanics.Translational.Interfaces.Flange_a flange_a_exit;
  Modelica.Mechanics.Translational.Interfaces.Flange_b flange_b_exit;
  Modelica.Blocks.Interfaces.BooleanInput u;
equation
  if u then
    flange_a_exit = flange_a_2;
    flange_b_exit = flange_b_2;
  else
    flange_a_exit = flange_a_1;
    flange_b_exit = flange_b_1;
  end if;
end FlangeSwitch;
But this approach does not work; the system is not balanced: 10 equations and 12 variables.
Is there any way to do this?
I don't think a Modelica tool will allow this operation (even if you had a balanced model), as it would potentially result in a variable-structure system, which is something Modelica does not support at the moment. See a nice introduction here: https://www.modelica.org/events/modelica2017/proceedings/html/submissions/ecp17132291_Stuber.pdf
Without fully knowing the application you could try two approaches:
Use a model that emulates a rotational clutch, like Modelica.Mechanics.Translational.Components.Brake with the parameter useSupport activated. This way you can create a "controllable mechanical connection" for connecting either of the flanges to the support connector. If I read your code correctly, you should connect flange_a_2 to the support and flange_a_exit to either flange_a or flange_b. When the brake is activated via its RealInput, there is a mechanical connection.
The second thing you can try is to measure either position or force (whichever of the two you want to apply) with a sensor such as Modelica.Mechanics.Translational.Sensors.PositionSensor, and then apply it using the respective source, which in this case would be Modelica.Mechanics.Translational.Sources.Position. Switching between the sources could then be done by switching the Real signals instead of the physical connectors. Mind that this could generate jumps in position when applying positions directly.
The link you posted relates to non-physical connectors, which are less restrictive than physical connectors, so the two solutions should be compared very carefully.
Switching from position as an input to force as an input would require the system of equations to be rebuilt when executing this switch. This will not be possible with current generation Modelica. You will need to find a solution that is based on the same input for the whole simulation.
Would it be enough to initialize the position so that the system starts the simulation at the point you want to move it to first (using the Position source)? What you lose is the movement of the system to that position.

Using a subset of a SUMO scenario for OMNeT++ network simulation (with VEINS)

I'm trying to evaluate an application that runs on a vehicular network using OMNeT++, Veins and SUMO. Because the application relies on realistic traffic behavior, I decided to use the LuST Scenario, which seems to be the state of the art for such data. However, I'd like to use specific parts of this scenario instead of the entire scenario (e.g., a high and a low traffic load fragment, perhaps others). It'd be nice to keep the bidirectional functionality that VEINS offers, although I'm mostly interested in getting traffic data from SUMO into my simulation.
One obvious way to implement this would be to use a warm-up period. However, I'm wondering if there is a more efficient way -- simulating 8 hours of traffic just to get a several-minute fragment feels inefficient and may be problematic for simulations with sufficient repetitions.
Does VEINS have a built-in mechanism for warm-up periods, primarily one that avoids sending messages (which is by far the most time consuming part in the simulation), or does it have a way to wait for SUMO to advance, e.g., to a specific time stamp (which also avoids creating vehicle objects in OMNeT++ and thus all the initiation code)?
In case it's relevant -- I'm using the latest stable versions of OMNeT++ and SUMO (OMNeT++ 4.6 with SUMO 0.25.0) and my code base is based on VEINS 4a2 (with some changes, notably accepting the TraCI API version 10).
There are two things you can do here for reducing the number of sent messages in Veins:
Use the OMNeT++ Warm-Up Period as described here in the manual. Basically it means to set warmup-period in your .ini file and make sure your code checks this with if (simTime() >= simulation.getWarmupPeriod()). The OMNeT++ signals for result collection are aware of this.
The TraCIScenarioManager offers a variable double firstStepAt #unit("s") which you can use to delay its start. Again, this can be set in the .ini file.
As the VEINS FAQ states, the TraCIScenarioManagerLaunchd offers two variables to configure the region of interest, based on rectangles or roads (string roiRoads and string roiRects). To reduce the simulated area, you can restrict simulation to a specific rectangle; for example, *.manager.roiRects="1000,1000-3000,3000" simulates a 2x2 km area between the two supplied coordinates.
With both solutions (best used in combination) you still have to run SUMO - but Veins barely consumes any of the time.
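For illustration, a combined omnetpp.ini fragment might look roughly like the following; the warm-up length, start time and rectangle are placeholders, and the manager module is assumed to be named manager as in the stock Veins examples:
[General]
# discard results recorded before the network has settled
warmup-period = 60s

# Veins TraCIScenarioManager(Launchd) settings
*.manager.firstStepAt = 28800s                 # placeholder: delay the start of the TraCI manager
*.manager.roiRects = "1000,1000-3000,3000"     # only manage vehicles inside this 2x2 km rectangle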

Micromaster 440. Ways to limit output frequency on the run?

I need to control a conveyor (driven by Micromaster 440) from a PC program using SFC14/15.
The scheme will be: Supervisor's PC -> (Ethernet) -> S7-1200 -> (Profibus) -> Micromaster 440.
At the moment, Micromaster's output frequency is controlled via a potentiometer (analog inputs) by the "field" operator. The problem is that sometimes the operator increases the conveyor speed in order to do his job faster and this affects the production negatively. The "supervisor" wants to be able to limit output frequency using the PC program.
Of course I've seen the list of MM440 parameters and I know about P1082, but I've discovered that, unfortunately, MM440 should be stopped before the new value of P1082 takes effect. In my case it's preferable to be able to change the value on the run.
Fortunately, it seems that P0757 - P0760 (input scaling) can be changed on the run, but these parameters are marked "confirm first", which means that the "P" button on the operator panel (BOP or AOP) must be pressed before the changes take effect.
But the MM440 has only one slot for the Profibus/BOP/AOP panel and I'll be using Profibus. So what will the behavior of the MM440 be in this case? I want to believe that, perhaps, this confirmation is not required when using the Profibus module...
I would opt for a solution where the operator no longer controls the belt speed directly but tells the S7-1200 PLC at what speed he would like the belt to run (either by using two +/- buttons or a potentiometer). The PLC can then control the speed of the belt (either by an analog output or two digital (+/-) outputs).
As an added bonus you can stop the belt when it is accidentally left on and things like that...

How to retrieve signal quality measures in iPhone?

In particular I would like to retrieve:
1. RSSI (received signal strength indicator),
2. RSCP (received signal code power),
3. SC (scrambling code) and
4. Ec/No (signal-to-noise ratio).
Which API functions from the iPhone SDK can help me retrieve these values?
Further to your comment above, there is also a GetSignalStrength function referenced among the private functions here.
But if you use one of these GetSignalStrength functions, how do you know what you are really getting?
I can't find any documentation, but I would question the assumption that it will always be RSSI.
There is no standard for calculating the number of bars that are shown on a screen. However, there is a standard measure of network strength, used when the mobile phone decides whether or not to move over to another cell.
For GSM, this standard is RSSI.
For UMTS, it is CPICH RSCP.
For LTE, it is RSRP.
Therefore, if you have 1 single function, that purports to return RSSI in all cases, I ask myself whether it will actually return RSCP when on a UMTS network, and RSRP when on an LTE network. In other words, is it a fudge that over-simplifies the true case?
The 3GPP AT command AT+CESQ (defined here) retrieves network strength. It has parameters that allow for any of the three network types, and you would expect that, if you are currently registered on a UMTS cell (for example), it would return UMTS parameters only. But I can't see any evidence of an equivalent way to get all the data across iPhone APIs.
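For reference, my understanding of the +CESQ response defined in 3GPP TS 27.007 is that it carries one pair of fields per technology, <rxlev>,<ber> for GSM, <rscp>,<ecno> for UMTS and <rsrq>,<rsrp> for LTE, with 99/255 marking "not known or not detectable". A phone camped on a UMTS cell might therefore answer roughly like this (the numeric values are made up):
AT+CESQ
+CESQ: 99,99,45,20,255,255
OK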
The next obvious question to ask would be "Can I use that AT command on the iPhone?" Someone has asked that on StackOverflow here. I don't know if AT+CESQ is supported on the iPhone.

Transferring data using ultrasound

Yamaha InfoSound and the ShopKick application use technologies that allow data to be transferred using ultrasound. That is, they play a nearly inaudible signal (>18 kHz) that can be picked up by modern mobile phones (iOS, Android).
What is the approach used in such technologies? What kind of modulation do they use?
I see several problems with this approach. First, 18 kHz is not inaudible. Many people cannot hear it, especially as they age, but I know I certainly can (I do regular hearing tests, work-related). Also, most phones have different low-pass filters on their A/D converters, and many devices, especially older Android ones (I've personally seen that happen), filter out everything above roughly 16 kHz. Your app therefore is not guaranteed to work on any hardware. The iPhone should probably be able to do it.
In terms of modulation, it could be anything really, but I would definitely rule out AM. Sound has next to zero robustness when it comes to volume. If I were to implement something like that, I would go with FSK. I would think that PSK would fail due to acoustic reflections and such. The difficulty is that you're working with non-robust energy transfer within a very narrow bandwidth. I certainly do not doubt that it can be achieved, but I don't see something like this proving reliable. Just IMHO, that is.
Update: Now that I think about it, plain on-off keying of a single tone would work if you're not transferring any data, just some short signals.
Can't say for Yamaha InfoSound and ShopKick, but what we used in our project was a variation of frequency modulation: the frequency of the carrier is modulated by a digital binary signal, where 0 and 1 correspond to 17 kHz and 18 kHz respectively. As for the demodulator, we tried a heterodyne. You can find more details here: http://rnd.azoft.com/mobile-app-transering-data-using-ultrasound/
There's nothing special about it being ultrasound; the principle is the same as data transmission through a modem, so any digital modulation is, in principle, feasible. You only have a specific frequency band (above 18 kHz) and some practical constraints (the medium is quite unreliable, I guess) that suggest using a simple, robust scheme with a low bit rate.
I don't know how they do it but this is how I do it:
If it is a string, make sure it's not a long one (the longer it is, the higher the error probability). Let's assume we're working with the vital part of the ASCII code, namely up to character number 127; then all you need is 7 bits per character. Transform each character into bits and modulate those bits using QFSK (4-level frequency-shift keying; there are several modulations to choose from, and frequency-shift-based ones have turned out to be the most robust of the conventional schemes I've tried... I've created my own modulation scheme for this use case). Select the carrier frequencies as 18.5, 19, 19.5, and 20 kHz (if you want to be mathematically strict in your design, select frequency values that assure both orthogonality and phase continuity at symbol transitions; if you can't, a good workaround to avoid abrupt symbol transitions is to multiply your symbols by a window of the same size, e.g. a Gaussian or Bartlett window). In my experience you can move these values in the range from 17.5 to 20.5 kHz (if you go lower it will start to bother people using your app; if you go higher, the typical microphone frequency response will attenuate your transmission and induce unwanted errors).
On the receiver side, implement a correlation or matched-filter receiver (an FFT receiver works as well, especially a zero-padded one, but it might be a little slower; I wouldn't recommend Goertzel, because frequency shifts due to the Doppler effect or speaker-microphone non-linearities could affect your reception). Once you have received the bit stream, assemble the characters from it and you will recover your message.
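Here is a minimal MATLAB sketch of the 4-tone FSK idea described above; it is not the author's exact scheme, the tone set, 50 ms symbols and the loop-back "reception" are only illustrative, and there is no windowing, synchronisation, filtering or error correction:
% 4-FSK over ultrasound: bare-bones modulator and demodulator
fs    = 44100;                          % sample rate in Hz
tones = [18500 19000 19500 20000];      % one carrier per 2-bit symbol
Nsym  = 2205;                           % samples per symbol (50 ms at 44.1 kHz)
t     = (0:Nsym-1)'/fs;

% Transmitter: 7-bit ASCII -> bits -> 2-bit symbols -> tone bursts
msg   = 'HI';                                            % short, even-length message so the bits pair up
bits  = reshape(dec2bin(double(msg), 7).' - '0', [], 1); % 7 bits per character, MSB first
syms  = bits(1:2:end)*2 + bits(2:2:end);                 % pair bits into symbol values 0..3
tx    = [];
for k = 1:numel(syms)
    tx = [tx; sin(2*pi*tones(syms(k)+1)*t)];             % append one tone burst per symbol
end

% Receiver (assumed perfectly synchronised and noiseless): strongest candidate tone per symbol
binIdx = round(tones*(4*Nsym)/fs) + 1;                   % FFT bins closest to the four tones
rxBits = zeros(numel(bits), 1);
for k = 1:numel(syms)
    seg = tx((k-1)*Nsym+1 : k*Nsym);                     % in practice: the recorded, band-passed segment
    S   = abs(fft(seg, 4*Nsym));                         % zero-padded FFT for finer bin spacing
    [~, best] = max(S(binIdx));                          % detected symbol value is best-1
    rxBits(2*k-1) = bitshift(best-1, -1);                % high bit of the 2-bit symbol
    rxBits(2*k)   = bitand(best-1, 1);                   % low bit
end
rxMsg = char(bin2dec(char(reshape(rxBits, 7, []).' + '0'))).'  % prints 'HI'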
If you face too many broadcast errors, try selecting a higher number of samples per symbol or band-pass filtering each frequency value before handing it to the demodulator. Using an error-correction code such as BCH or Reed-Solomon is sometimes the only way to ensure error-free communication.
One topic everybody always forgets to talk about is synchronization (knowing, on the receiver side, when the transmission has begun). You have to be creative here and run a lot of tests with a lot of phones before you can derive a detection threshold that works on all of them; note that this might also be distance-dependent.
If you are unfamiliar with these subjects, I would recommend a couple of great books:
Digital Modulation Techniques by Fuqin Xiong
Digital Communications: Fundamentals and Applications by Bernard Sklar
Digital Communications by John G. Proakis
You might have luck with a library I created for sound-based modems, libquiet. It gives you a handful of profiles to work from, including a slow "Ultrasonic whisper" profile with spectral content above 19 kHz. The library is written in C but would require some work to interface with iOS.