Does anyone know of any recent examples of receiving data through the iPhone's audio jack? Thanks!
Not an example but a method:
I would think the data is exchanged using different audio tones, perhaps FSK (frequency-shift keying) or another tone-encoding scheme such as DTMF.
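For illustration, here is a minimal sketch of the FSK idea in Swift: each bit of the payload becomes a burst of one of two tones. The frequencies and baud rate below (1200/2200 Hz at 300 baud, loosely Bell 202-style) are assumptions chosen for the example, not anything the hardware requires.

```swift
import Foundation

// Minimal FSK sketch: each bit becomes a burst of one of two tones.
// 1200/2200 Hz at 300 baud are illustrative choices, not requirements.
let sampleRate = 44_100.0
let baudRate = 300.0
let samplesPerBit = Int(sampleRate / baudRate)

func fskSamples(for bytes: [UInt8]) -> [Float] {
    var samples: [Float] = []
    var phase = 0.0
    for byte in bytes {
        for bit in 0..<8 {                 // LSB first, by convention here
            let freq = (byte >> bit) & 1 == 1 ? 2200.0 : 1200.0
            // Keep phase continuous across bit boundaries to avoid clicks.
            for _ in 0..<samplesPerBit {
                samples.append(Float(sin(phase)))
                phase += 2.0 * .pi * freq / sampleRate
            }
        }
    }
    return samples
}

let tones = fskSamples(for: Array("HELLO".utf8))
print("Encoded \(tones.count) samples (\(Double(tones.count) / sampleRate) s)")
```

Decoding on the receiving side is the mirror image: record the microphone/line input and detect which of the two frequencies dominates each bit period (e.g. with a Goertzel filter).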
I was wondering if there is a way to store the data in a custom audio worklet for further processing on the client side, i.e. turning it into a WAV file. I've seen that it's possible to output an audio stream to a MediaRecorder, but that results in lossy audio via the Ogg codec. If possible, I would like access to the raw PCM data from the worklet processor so I can encode it as WAV or another lossless format.
My hunch is that this can be accomplished by attaching something to the global audio scope and retrieving it from the audio context, but I'm not sure. Help would be appreciated!
Answering my own question: I see that it is now possible to use PCM as a codec, e.g. in https://github.com/muaz-khan/RecordRTC/. This is unfortunately not covered in most major Web Audio documentation, but since it works in several modern browsers, it is good enough for my needs!
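For anyone landing here later: inside the worklet, `process()` sees the raw Float32 sample blocks, and the processor's `port` can post copies of them to the main thread, so the only missing piece is the WAV container. That container is just a 44-byte RIFF header in front of little-endian PCM. Here is a sketch of the layout (shown in Swift; the byte layout is identical in any language), assuming 16-bit mono at 44.1 kHz:

```swift
import Foundation

// Sketch: wrap raw 16-bit PCM in a minimal 44-byte RIFF/WAVE header.
// Mono at 44.1 kHz assumed for illustration.
func wavData(from samples: [Int16], sampleRate: UInt32 = 44_100) -> Data {
    let channels: UInt16 = 1
    let bitsPerSample: UInt16 = 16
    let byteRate = sampleRate * UInt32(channels) * UInt32(bitsPerSample / 8)
    let blockAlign = channels * bitsPerSample / 8
    let dataSize = UInt32(samples.count) * 2

    var data = Data()
    func append<T: FixedWidthInteger>(_ value: T) {
        withUnsafeBytes(of: value.littleEndian) { data.append(contentsOf: $0) }
    }
    data.append(contentsOf: Array("RIFF".utf8))
    append(36 + dataSize)                 // file size minus the first 8 bytes
    data.append(contentsOf: Array("WAVE".utf8))
    data.append(contentsOf: Array("fmt ".utf8))
    append(UInt32(16))                    // fmt chunk length
    append(UInt16(1))                     // audio format 1 = linear PCM
    append(channels)
    append(sampleRate)
    append(byteRate)
    append(blockAlign)
    append(bitsPerSample)
    data.append(contentsOf: Array("data".utf8))
    append(dataSize)
    for s in samples { append(s) }        // the samples themselves
    return data
}
```

On the browser side you would build the same header with a DataView, convert the collected Float32 blocks to Int16, and concatenate everything into a Blob.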
I am developing an iPhone app that sends/receives data through the iPhone's audio/headphone jack. I assume we can send/receive data through the headphone jack, but that data is an audio file with some codec applied. I want to read an audio file and send the raw data to the headphone jack. How can I do that? Any help or code snippet is appreciated.
Best regards,
Abdul Qavi
Since the headphone jack is on the other side of a couple of D/A converters, I guess you'd need a codec that converts the data to modem tones. Hope your files are really small, because the transfer rate would be pretty slow, even if you could figure out how to use both channels.
Just 'tooth 'em.
Rgds,
Martin
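To make the modem-tone suggestion concrete: "sending raw data to the jack" still means playing audio, so you encode the bytes as tones (FSK or similar, as in the thread above) and play the resulting samples. A minimal sketch with AVAudioEngine; the placeholder buffer below is one second of 1200 Hz, standing in for real encoded data:

```swift
import AVFoundation

// Sketch: "send" data out of the jack by playing encoded samples as
// ordinary audio. Substitute an FSK encoder's output for real data.
let tones = (0..<44_100).map {
    Float(sin(2.0 * .pi * 1200.0 * Double($0) / 44_100.0))
}

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)!

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)

let buffer = AVAudioPCMBuffer(pcmFormat: format,
                              frameCapacity: AVAudioFrameCount(tones.count))!
buffer.frameLength = buffer.frameCapacity
for i in 0..<tones.count {
    buffer.floatChannelData![0][i] = tones[i]   // copy into channel 0
}

try engine.start()
player.scheduleBuffer(buffer, completionHandler: nil)
player.play()
```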
A few days ago I saw an interesting device for the iPhone, Square: https://squareup.com/
You plug it into the iPhone's earphone socket and it transfers data to the iPhone, which a running app can receive.
Does anyone know how it is implemented? I guess it encodes data into an audio stream and "sings" it, and the app on the phone records the sound and decodes it. But how? Is there a protocol or an SDK?
The implementation is likely no different from that of a simple acoustic modem. The relevant APIs include Audio Units (low-level) and Audio Queue Services (higher level).
Matt Gallagher has written an excellent (as always!) post on creating an iOS tone generator, which is one way of enabling what you are after.
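In that spirit, here is a sketch of a continuous tone generator using AVAudioSourceNode (iOS 13+), a modern stand-in for the Audio Unit approach in Matt's post. Shift-keying is then just a matter of changing `frequency` on a schedule (thread-safely, in real code):

```swift
import AVFoundation

// Sketch of a continuous tone generator, the building block of a
// simple acoustic modem.
let engine = AVAudioEngine()
let sampleRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
var phase = 0.0
var frequency = 1200.0   // change this to shift-key between tones

let source = AVAudioSourceNode { (_, _, frameCount, audioBufferList) -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let sample = Float(sin(phase))
        phase += 2.0 * .pi * frequency / sampleRate
        for buffer in buffers {             // fill every output channel
            buffer.mData!.assumingMemoryBound(to: Float.self)[frame] = sample
        }
    }
    return noErr
}

engine.attach(source)
engine.connect(source, to: engine.mainMixerNode, format: nil)
try engine.start()
```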
I'm currently working on a project where it is necessary to record sound being played by the iPhone. By this, I mean recording sound being played in the background like a sound clip or whatever, NOT using the built-in microphone.
Can this be done? I am currently experimenting with AVAudioRecorder, but it only captures sound from the built-in microphone.
Any help would be appreciated!
This is possible only if your app plays the sound itself, using either the Audio Unit RemoteIO API or the Audio Queue API with uncompressed raw audio, and with no background audio mixed in. Then you have full access to the audio samples and can queue them up to be saved in a file.
It is not possible to record sound output of the device itself using any of the other public audio APIs.
Just to elaborate on hotpaw2's answer: if you are responsible for generating the sound, then you can retrieve it; if you are not, you cannot. You only have control over sounds in your own process. Yes, you can choose to stifle sounds coming from other processes, but you can't actually get the data for those sounds or process them in any way.
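One concrete illustration of the "you generate it, you can capture it" case: if your app produces the sound through AVAudioEngine, a tap on the mixer node hands you every rendered buffer on its way to the hardware. A sketch (your player nodes are assumed to be attached to the engine already):

```swift
import AVFoundation

// Sketch: capture your own app's output by tapping the mixer that
// feeds the hardware. Sees only audio rendered by this engine.
let engine = AVAudioEngine()   // assume your player nodes are attached here
let format = engine.mainMixerNode.outputFormat(forBus: 0)
let url = FileManager.default.temporaryDirectory
    .appendingPathComponent("capture.caf")
let file = try AVAudioFile(forWriting: url, settings: format.settings)

engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    // Called with each rendered buffer; write it straight to disk.
    try? file.write(from: buffer)
}
try engine.start()
```

Note that this still cannot see audio from other processes; other apps' sound never reaches the tap.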
Does anyone know if this is allowed by the iPhone's various APIs, or even whether Apple allows it at all?
Example: Plug something in the audio jack and use it as a "taser" (this is just a hypothetical/proof-of-concept example).
Yes. The standard way of "sending electrical signals through the audio jack" is known as "playing audio", and I'm pretty sure this is possible on the iPhone.