Transmit data using sound and recover it on the other PC in MATLAB - matlab

I want to transmit data from one PC to another using sound. I am trying PSK modulation, but when I play the output signal on one PC and record it on the other, it is very distorted and I can't even detect the phase changes. Am I missing something?
For further details: I want to make a wireless calculator that takes input on one computer, transmits it over sound waves to another PC where the result is calculated, and then transmits the result back.
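As a starting point, here is a minimal loopback sketch of BPSK in Python/NumPy (the same idea translates directly to MATLAB). The sample rate, carrier frequency, and baud rate below are assumptions chosen to suit acoustic channels; over a real speaker-to-microphone link you would additionally need a bandpass filter, a synchronization preamble, and carrier/timing recovery, since the speaker and microphone responses distort the phase.

```python
import numpy as np

FS = 44100          # sample rate in Hz (assumed)
FC = 2000           # carrier frequency in Hz (assumed; keep it in the speaker's sweet spot)
BAUD = 100          # symbols per second (assumed; low rates survive acoustic channels better)
SPS = FS // BAUD    # samples per symbol

def bpsk_modulate(bits):
    """Map bits to +/-1 symbols and multiply by the carrier (phase 0 or pi)."""
    t = np.arange(len(bits) * SPS) / FS
    symbols = np.repeat(2 * np.array(bits) - 1, SPS)  # 0 -> -1, 1 -> +1
    return symbols * np.sin(2 * np.pi * FC * t)

def bpsk_demodulate(signal):
    """Coherent detection: correlate each symbol period with the reference carrier."""
    t = np.arange(len(signal)) / FS
    mixed = signal * np.sin(2 * np.pi * FC * t)  # down-mix with a phase-aligned carrier
    n_sym = len(signal) // SPS
    bits = []
    for k in range(n_sym):
        # integrate over one symbol; the sign of the correlation gives the bit
        energy = mixed[k * SPS:(k + 1) * SPS].sum()
        bits.append(1 if energy > 0 else 0)
    return bits

bits = [1, 0, 1, 1, 0, 0, 1]
recovered = bpsk_demodulate(bpsk_modulate(bits))
print(recovered)  # -> [1, 0, 1, 1, 0, 0, 1]
```

This works in loopback because the demodulator's reference carrier is perfectly aligned with the transmitter's; after a real acoustic channel that alignment is exactly what the missing synchronization stage would have to recover.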

Related

How to get real time audio data from raspberry pi to simulink and apply real time signal processing

I am facing a problem getting live (real-time) audio data from a Raspberry Pi into Simulink.
I want to read data continuously from the microphone connected to the Raspberry Pi and process it in Simulink, also continuously.
I am using the ALSA Audio Capture block, but it's not working for me.
Has anyone tried this before who could help?
Thank you.

Transmission of a live webcam feed over RF and receiving the same feed on a different Pluto

Is there any way to transmit a live feed from a Raspberry Pi camera or webcam over an RF channel with a software-defined radio, and receive the feed somewhere else?
I tried to connect everything in MATLAB Simulink, but the delay is totally unacceptable. Is there any way to transmit it over RF?

How can more than two audio programs play simultaneously through one speaker?

I have a question that came up while studying operating systems.
As I understand it, an operating system is a kind of resource manager, and an audio program running on my PC uses the speaker as a resource allocated by the OS.
When I run two or more audio processes on my PC, their sounds come out of the speaker simultaneously.
I wonder what the mechanism for this is. Do the processes hold and release the resource as they switch between the running and ready states, or does the OS let them share the resource?
Multiple sounds can be mixed together additively. For software, this mostly means a small buffer of samples where you add (with saturation) the samples from two or more streams of digitized audio before sending the result to the speaker(s). Of course, sound cards are also likely to be capable of doing this mixing themselves (with some hardware-specific limit on the maximum number of streams that can be handled).
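The additive mixing described above can be sketched in a few lines. This is a hypothetical Python/NumPy helper, not how any particular OS mixer is implemented; it widens 16-bit samples to avoid overflow, sums the streams, and saturates the result back into the valid sample range.

```python
import numpy as np

def mix_streams(streams):
    """Additively mix several signed 16-bit PCM buffers of equal length.
    Sums are computed in 32-bit, then clamped (saturated) to the int16 range."""
    total = np.sum([np.asarray(s, dtype=np.int32) for s in streams], axis=0)
    return np.clip(total, -32768, 32767).astype(np.int16)

a = np.array([1000, 20000, -30000], dtype=np.int16)
b = np.array([2000, 20000, -10000], dtype=np.int16)
mixed = mix_streams([a, b])
print(mixed.tolist())  # -> [3000, 32767, -32768]: the second and third sums saturate
```

Without the widening to 32-bit, the intermediate sums (40000 and -40000 here) would wrap around in int16 arithmetic and produce loud artifacts instead of clean clipping.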
For the "PC speaker" there's no support for digitized sound or much else (it only supports "one fixed frequency tone at a time"). If that's what you're asking about then you can (with a relatively high amount of overhead) use pulse-width modulation (using a pair of timers) to force it to play digitized sound and still do the mixing in software. Alternatively you can nerf the audio such that only one tone occurs at a time (e.g. if there's 2, pick the highest frequency tone or make one wait until the other is finished).

No sound output when running jsyn on raspberry pi (Raspbian Jessie)

I've started writing a Java synthesizer using the JSyn library. It works well on Windows and OS X, but not when I run it on Raspbian. When starting the program I do notice some activity on the headphone output: it outputs some quiet noise, but not the clear, loud sawtooth wave it produces on Windows and OS X. Which sound device is the correct one to select as output when starting the synthesizer if I want to use the headphone jack? There are four available when I call AudioDeviceManager.getDeviceCount().
It is hard to know which of the 4 devices to use. This example will list them by name and also indicate which one is the default input or output.
https://github.com/philburk/jsyn/blob/master/tests/com/jsyn/examples/ListAudioDevices.java
It is also possible that the CPU cannot keep up. Try just playing a single oscillator. A sine wave is good because then you can easily hear any click or distortion. Here is an example that does that:
https://github.com/philburk/jsyn/blob/master/tests/com/jsyn/examples/PlayTone.java

Getting the voltage applied to an iPhone's microphone port

I am looking at a project where we send information from a peripheral device to an iPhone through the microphone input.
In a nutshell, the iPhone would act as a voltmeter. (In reality, the controller we developed will send data encoded as voltages to the iPhone for further processing).
As there are several voltmeter apps on the AppStore that receive their input through the microphone port, this seems to be technically possible.
However, scanning the AudioQueue and AudioFile APIs, there doesn't seem to be a method for directly accessing the voltage.
Does anyone know of APIs, sample code or literature that would allow me to access the voltage information?
The A/D converter on the line-in is a voltmeter, for all practical purposes. The sample values map to the voltage applied at the time the sample was taken. You'll need to do your own testing to figure out what voltages correspond to the various sample values.
As far as I know, it won't be possible to get the voltages directly; you'll have to figure out how to convert them to equivalent 'sounds' such that the iOS APIs will pick them up as sounds, which you can interpret as voltages in your app.
If I were attempting this sort of app, I would hook up some test voltages to the input (very small ones!), capture the sound and then see what it looks like. A bit of reverse engineering should get you to the point where you can interpret the inputs correctly.
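Concretely, that mapping is just a linear scale once you have calibrated the full-scale voltage with known test inputs. Here is a minimal Python sketch, assuming signed 16-bit samples and a made-up full-scale figure of 0.5 V (the real figure is whatever your reverse-engineering measurements yield):

```python
def sample_to_voltage(sample, full_scale_v=0.5):
    """Convert a signed 16-bit sample to volts.
    full_scale_v is the measured input voltage that produces a full-scale
    sample; 0.5 V here is a placeholder calibration value, not a real spec."""
    return sample / 32768.0 * full_scale_v

print(sample_to_voltage(16384))   # half of full scale -> 0.25
print(sample_to_voltage(-32768))  # negative full scale -> -0.5
```

One caveat worth noting: microphone inputs are AC-coupled, so a steady DC voltage will not pass through; the peripheral has to encode its data as a varying (audio-frequency) signal, which is consistent with the "convert them to equivalent sounds" point above.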