Watermarking sound, reading through iPhone

I want to add a few bytes of data to a sound file (for example a song). The sound file will be transmitted via radio to a receiver who uses, for example, the iPhone microphone to pick up the sound, and an application will recover the original bytes of data. Preferably it should not be audible to humans.
What is such technology called? Are there any applications that can do this?
Libraries/apps that can be used on iPhone?

This is called audio steganography (or audio watermarking). There are established algorithms for doing it.

I've done some research, and it seems the way to go is:
Use low audio frequencies.
Spread the "bits" around pseudo-randomly; do not use a regular pattern, as it will be picked up by the listener. "White noise" is a good target. The pseudo-random pattern is known to both the sender and the receiver.
Use a Fourier transform to pick up frequency and amplitude on the receiving side.
Clean up input data.
Use checksum/redundancy-algorithms to compensate for loss.
I'm writing a prototype and am having a bit of difficulty picking up the right frequency, as it shows a ~4 Hz offset (100 Hz becomes 96.x Hz when played and picked up by the microphone).
This is not the answer, but I hope it helps.
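For the detection side, here is a minimal Python sketch (my own, not code from the prototype) of picking the dominant frequency with an FFT. It also shows one plausible source of the ~4 Hz offset: FFT bin spacing rather than the microphone. With a 4096-sample window at 44.1 kHz the bins are ~10.8 Hz apart, so a 100 Hz tone snaps to the nearest bin at ~96.9 Hz, which matches the "96.x Hz" symptom:

import numpy as np

def dominant_frequency(samples, sample_rate):
    # Window to reduce spectral leakage, then take the magnitude spectrum.
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    peak_bin = int(np.argmax(spectrum))
    # Frequency resolution is sample_rate / len(samples); with a short
    # window this is several Hz and looks like a constant "offset".
    return peak_bin * sample_rate / len(samples)

sr = 44100
t = np.arange(4096) / sr
print(dominant_frequency(np.sin(2 * np.pi * 100 * t), sr))  # ~96.9, not 100

Longer windows, zero-padding, or interpolating between the two largest bins will tighten the estimate.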

Related

Different length of sound files with different sampling frequencies

I'm currently struggling to understand what is happening. I created a sound using the audiowrite function in Matlab (the sound is built from two different sounds, but I don't think that matters), once with a sampling frequency of 44100 Hz and once with the same sound data but a sampling frequency of 48000 Hz. Now I'm observing that the sound produced at 44100 Hz is approx. 30 sec longer than the other one (48000 Hz sampling). It looks like phase shifting of some sort, but I'm not sure. Any help/explanation is appreciated. I also made an amplitude/time plot for better understanding:
(I set the x axis to 350 sec to see where the signal ends.)
EDIT: here is the code for how I create the sound file:
[y1,F1] = audioread(cave_file);  % cave and forest files are mp3s loaded earlier; both have a sampling frequency of 48000 Hz
[y2,F2] = audioread(forest_file);
samp_freq = 44100;
%samp_freq = 48000;
% Zero-pad the shorter signal so both have the same length, then put the
% two sounds side by side as channels of one output matrix.
a = max(size(y1), size(y2));
z = [[y1; zeros(abs([a(1),0]-size(y1)))], [y2; zeros(abs([a(1),0]-size(y2)))]];
audiowrite('test_sound.wav', z, samp_freq);
What is the storage format? More specifically, is the info about the sampling rate and number of channels stored in the file's metadata, which is then used during playback?
If so, then there are 3 possibilities for this behavior:
1) The sampling-rate metadata of the 44.1 kHz file is incorrect, while the audio was sampled at the correct rate, i.e. 44.1 kHz. Because the 44.1 kHz file plays longer than the 48 kHz one, which I'm assuming produces the correct sound for the correct duration, it can be concluded that the sampling rate stored in the 44.1 kHz file's metadata is actually lower than 44.1 kHz.
Could you please check the metadata, or attach the files here so that I can take a look?
2) The sampling didn't happen at the correct rate, while the metadata has 44.1 kHz as the sampling rate.
3) The number of channels is stored incorrectly.
In case the files are raw PCM, the likely explanation is that the correct sampling rate and/or number of channels is not selected when playing the 44.1 kHz file.
Hope this helps
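Given the posted code, a concrete variant of possibility 1 seems most likely: Matlab's audiowrite does not resample, it just stamps samp_freq into the WAV header, so samples decoded from the 48 kHz mp3s but written with a 44100 header play back ~8.8% slower (48000/44100 ≈ 1.088). A quick sanity check in Python, with a made-up length chosen to match the ~30 sec gap in the question:

# Hypothetical length picked to illustrate the ~30 s discrepancy.
num_samples = 340 * 48000           # ~340 s of audio decoded at 48 kHz
dur_48k = num_samples / 48000       # 340.0 s when the header says 48000
dur_44k = num_samples / 44100       # ~370.1 s when the header says 44100
print(dur_44k - dur_48k)            # ~30.1 s longer, no resampling involved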

How is it possible to encode black/white picture into ".wav"-file?

How is it possible to encode a black/white picture into a ".wav" file? I know that it is possible for sure with the help of "steganography", but I don't know its algorithms. What algorithms exist? And what books/sources are best for understanding their principles?
Edited:
Actually, I have a stereo wav file. My task is to decode pictures from it. The task says that the frequencies of the left channel give the X coordinate and the frequencies of the right channel give the Y coordinate of a Cartesian coordinate system. These points compose a picture containing a text message. So I must write a program for this, and I don't have any idea where to start.
Probably the simplest version of steganography using a wav file would be to use 16-bit samples in the wave file, but only dedicate the 15 most significant bits to sound. In the least significant bit of each sample, you'd encode one pixel of your black and white picture.
Regenerating the picture would require software to open the wave file, take the least significant bit from each sample, and put those bits back together with each other into (for example) a JPEG file.
To put things into perspective, a CD has two channels containing 16 bit samples at a rate of 44.1 KHz, so you'd only need the LSBs from around 10 seconds of sound to encode a fairly typical full-color JPEG (e.g., 100KB or so). A wave file of a typical ~3 minute pop song could hide around 15-20 full-color pictures pretty easily.
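Here is a minimal Python sketch of the LSB scheme just described, assuming the samples are already available as a 16-bit numpy array (the function names and the numpy approach are mine):

import numpy as np

def embed_lsb(samples, payload):
    # Hide payload bytes in the least significant bits of int16 samples.
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    if len(bits) > len(samples):
        raise ValueError("payload too large for this many samples")
    out = samples.copy()
    out[:len(bits)] = (out[:len(bits)] & ~1) | bits  # clear LSB, set our bit
    return out

def extract_lsb(samples, num_bytes):
    bits = (samples[:num_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

samples = (np.random.randn(44100) * 1000).astype(np.int16)
stego = embed_lsb(samples, b"hi")
print(extract_lsb(stego, 2))  # b'hi'

Real-world use would add a length header and some redundancy, but this is the whole trick.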
Edit (to reply to the edited question): This is a little tougher to deal with. An individual sample can't represent a frequency; it just represents the amplitude at a given point in time. To get frequency, you need a number of samples over a period of time, and you need to know the exact period to convert.
Once you know that, you basically do an FFT on the samples. That will tell you the relative strengths of signal at all possible frequencies. Presumably, you'd pick the strongest one and scale appropriately. Do the same for the other channel and draw a pixel at that point.
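As a sketch of that per-channel decode in Python (block_size is a guess; you need to work out how long each point's tones are actually held in the file):

import numpy as np

def dominant_freq(block, sample_rate):
    spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    return int(np.argmax(spectrum)) * sample_rate / len(block)

def decode_points(left, right, sample_rate, block_size=1024):
    # One (x, y) point per block: strongest frequency in each channel.
    points = []
    for i in range(0, len(left) - block_size + 1, block_size):
        x = dominant_freq(left[i:i + block_size], sample_rate)
        y = dominant_freq(right[i:i + block_size], sample_rate)
        points.append((x, y))
    return points  # scale and plot these to reveal the hidden picture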
Your ears are not sensitive to small changes in a sound file.
Wav files hold UNCOMPRESSED data, so a wav is just a stream of 16-24 bit samples. Your ears cannot notice slight differences in the lowest bits. All you need to do is periodically inject bit values that represent an image into the data.
So if you insert one pixel for every 1000 data points, you can hide an image (without even encrypting it) in a wave file. If a user plays the file, they CANNOT hear it.
When the file is saved on your computer, or on a computer far away, you can use a decoding tool that is aware of the hiding technique.
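A Python sketch of that "one bit per 1000 data points" idea, assuming int16 samples in a numpy array (a sparser cousin of the LSB example shown earlier; the stride is the only real parameter):

import numpy as np

def embed_sparse(samples, payload, stride=1000):
    # Hide one payload bit in the LSB of every stride-th sample.
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    out = samples.copy()
    out[::stride][:len(bits)] = (out[::stride][:len(bits)] & ~1) | bits
    return out

def extract_sparse(samples, num_bytes, stride=1000):
    bits = (samples[::stride][:num_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()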

How to export sound from timeline of sounds on iOS with OpenAL

I'm not sure if it's possible to achieve what I want, but basically I have an NSDictionary which represents a recording. It's a timeline of which sound id was played at what point in time.
I have it so that you can play back this timeline/recording, and it works perfectly.
I'm wondering if there is any way to take this timeline and export it as a single sound that could be saved to a computer when the device is synced with iTunes.
So basically I'm asking if I can take a timeline of sounds, play it back and have these sounds stitched together as a single sound, that can then be exported.
I'm using OpenAL as my sound framework and the sound files are all CAFs.
Any help or guidance is appreciated.
Thanks!
You will need:
A good understanding of linear PCM audio format (See Wikipedia's Linear PCM page).
A good understanding of audio sample-rates and some basic maths to convert your timings into sample-offsets.
An awareness of how two's-complement binary numbers (signed/unsigned, 16-bit, 32-bit, etc.) are stored in computers, and how the endian-ness of a processor affects this.
Patience, interest in learning, and a strong desire to get this working.
Here's what to do:
Enable file sharing in your app (set UIFileSharingEnabled to YES in Info.plist and write files to the /Documents directory).
Render the used sounds into memory buffers containing linear PCM audio data (if they are not already, i.e. if they are compressed). You can do this using the offline rendering functionality of Audio Queues (see Apple's audio queue docs). It will make things a lot easier if you render them all to the same PCM format and sample rate (for example 16-bit signed samples at 44,100 Hz; I'll use this format for all examples), and use the same format for your output. I recommend starting off with a mono format, then adding stereo once you get it working.
Choose an uncompressed output format and mix your sounds into a single stream:
3.1. Allocate a buffer large enough, or open a file stream to write to.
3.2. Write out any headers (for example if using WAV format output instead of raw PCM) and write zeros (or the mid-point of your sample range if not using a signed sample format) for any initial silence before your first sound starts. For example if you want 0.1 seconds silence before your first sound, write 4410 (0.1 * 44100) zero-samples i.e. write 4410 shorts (16-bit) all with zero.
3.3. Now keep track of all 'currently playing' sounds and mix them together (a sketch follows this list). Start with an empty list of 'currently playing' sounds, and keep track of the 'current time' of the sample you are mixing; for each sample you write out, increment the 'current time' by 1.0/sample_rate. When it is time for another sound to start, add it to the 'currently playing' list with a sample offset of 0. To do the mixing, iterate through all of the 'currently playing' sounds, add together their current samples, then increment the sample offset for each of them. Write the summed value into the output buffer. For example, if soundA starts at 0.1 seconds (after the silence) and soundB starts at 0.2 seconds, you will be doing the equivalent of output[8820] = soundA[4410] + soundB[0]; for sample 8820, and then output[8821] = soundA[4411] + soundB[1]; for sample 8821, etc. As a sound ends (you get to the end of its samples), simply remove it from the 'currently playing' list and keep going until the end of your audio data.
3.4. The simple mixing (sum of samples) described above does have some problems. For example, if two samples have values that add up to a number larger than 32767, the result cannot be stored in a signed 16-bit number; this is called clipping. For now, just clamp the value to 32767 and get it working; later on, come back and implement a simple limiter (see the description at the end).
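Here is the promised sketch of steps 3.3 and 3.4, in Python (names are mine; it sums into a 32-bit buffer and clamps at the end instead of maintaining an explicit 'currently playing' list, which comes to the same thing for offline mixing):

import numpy as np

def mix_timeline(events, sample_rate, total_samples):
    # events: list of (start_time_in_seconds, int16 numpy array) pairs.
    out = np.zeros(total_samples, dtype=np.int32)  # headroom while summing
    for start_time, sound in events:
        offset = int(start_time * sample_rate)     # time -> sample offset
        if offset >= total_samples:
            continue                               # starts after the end
        end = min(offset + len(sound), total_samples)
        out[offset:end] += sound[:end - offset]
    # Step 3.4: clamp to the signed 16-bit range instead of wrapping around.
    return np.clip(out, -32768, 32767).astype(np.int16)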
Now that you have a mixed version of your track in an uncompressed linear PCM format, that might be enough, so write it to /Documents. If you want to write it in a compressed format, you will need to get the source for an audio encoder and run your linear PCM output through that.
Simple limiter:
Let's choose to limit the top 10% of the sample range, so if the absolute value is greater than 29490 (int limitBegin = (int)(32767 * 0.9f);) we will scale down the value. The maximum possible peak would be int maxSampleValue = 32767 * numPlayingSounds; and we want to scale values above limitBegin to peak at 32767. So do the summation into sampleValue as per the very simple mixer described above, then:
if (sampleValue > limitBegin)
{
    // Scale the portion of the value above limitBegin so that the largest
    // possible sum (maxSampleValue) lands exactly on 32767.
    float overLimit = (sampleValue - limitBegin) / (float)(maxSampleValue - limitBegin);
    sampleValue = limitBegin + (int)(overLimit * (32767 - limitBegin));
}
If you're paying attention, you will have noticed that when numPlayingSounds changes (for example when a new sound starts), the limiter becomes more (or less) harsh and this may result in abrupt volume changes (within the limited range) to accommodate the extra sound. You can use the maximum number of playing sounds instead, or devise some clever way to ramp up the limiter over a few milliseconds.
Remember that this is operating on the absolute value of sampleValue (which may be negative in signed formats), so the code here is just to demonstrate the idea. You'll need to write it properly to handle limiting at both ends (peak and trough) of your sample range. Also, there are some tricks you can do to optimize all of the above during the mixing - you will probably spot these while you're writing the mixer, be careful and get it working first, then go back and refactor/optimize if needed.
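For reference, a sign-aware Python sketch of the same limiter curve, handling both ends of the range by limiting the magnitude and restoring the sign:

def limit_sample(sample_value, num_playing_sounds):
    # Same curve as above, but applied to the magnitude so that both the
    # peak (positive) and trough (negative) ends are limited.
    limit_begin = int(32767 * 0.9)               # 29490
    max_sample = 32767 * num_playing_sounds
    magnitude = abs(sample_value)
    if magnitude > limit_begin:
        over = (magnitude - limit_begin) / float(max_sample - limit_begin)
        magnitude = limit_begin + int(over * (32767 - limit_begin))
    return magnitude if sample_value >= 0 else -magnitude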
Also remember to consider the endian-ness of the platform you are using and the file-format you are writing to, as you may need to do some byte-swapping.
One approach which isn't too hard, if your files are stored in a simple format, is just to combine them manually. That is, create a new file in the CAF format and manually put together the pieces you want.
This will be really easy if the sounds are uncompressed (linear PCM). But first read the documentation on the CAF file format here:
http://developer.apple.com/library/mac/#documentation/MusicAudio/Reference/CAFSpec/CAF_spec/CAF_spec.html#//apple_ref/doc/uid/TP40001862-CH210-SW1

Alloc and init buffer and play DTMF

I want to allocate a memory buffer and initialize it with data from a mathematical equation in order to obtain a pure DTMF tone. I am using Audio Queue Services to allocate and fill the buffer, and I used a formula summing two sine waves at two different frequencies. However, no sound or tone is played.
It may be important to mention that the function of AudioPlayer: initWithData:error:
You haven't really given enough information to diagnose your problem. The only obvious question to ask is whether you have set up your audio session.
A good sample to use as a reference is Dave Dribin's A440 sample from iPadDevCamp Chicago. It shows how to play a simple 440 Hz tone using both Audio Queue Services and Audio Unit graphs. Hopefully that will let you see where your issue is.
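For the buffer contents themselves, a Python sketch of the two-sine DTMF formula (697 Hz + 1209 Hz is the standard pair for the digit '1'; the int16 scaling is an assumption that must match whatever stream format you configured for the queue):

import numpy as np

def dtmf_samples(f_low, f_high, duration, sample_rate=44100):
    # DTMF is simply the sum of two sine waves at the row/column frequencies.
    t = np.arange(int(duration * sample_rate)) / sample_rate
    tone = 0.5 * np.sin(2 * np.pi * f_low * t) + 0.5 * np.sin(2 * np.pi * f_high * t)
    return (tone * 32767 * 0.8).astype(np.int16)  # 16-bit with some headroom

buf = dtmf_samples(697, 1209, 0.2)  # the digit '1' is 697 Hz + 1209 Hz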

iPhone audio and AFSK

Here is a question for all you iPhone experts:
If you remember the sounds that modems used to make, or the sound of loading a program from a cassette tape: I am trying to replicate this on the iPhone for a ham radio application. I have a stream of data (ASCII) and I need to encode it as AFSK at 1200 baud, so basically everything in the stream is converted to a series of 1200 Hz and 2200 Hz tones. It needs to sound something like this: http://upload.wikimedia.org/wikipedia/commons/2/27/AFSK_1200_baud.ogg
I successfully built a bit array out of the string, but when I try to assign tones to each bit I get gaps in the sound, therefore it doesn’t demodulate correctly.
Any thought of how one should tackle this problem? Thank you.
The mobilesynth project is open-source. You might be able to scan that for code that generates the tones you need.
How are you assigning tones to the bits? Remember, a digital audio signal is just a stream of samples with values between -1 and 1. Perhaps there is a clipping issue between tone assignments. This can happen if the signal dives below -1 or above 1. If it stays above or below this range at a constant value, there will be no sound. Maybe you could output your stream of samples to check if this is the case. Or plug the output into an oscilloscope...
Also note that clicking can occur between "uneven" transitions of signals. For example, if I output a sample with value 1 followed immediately by a sample with value -1, a click or pop will be produced.
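A common fix for both the gaps and the pops is to generate the tones phase-continuously: keep one running phase accumulator and change only the frequency step at bit boundaries. A Python sketch of the idea (my own names; 48 kHz is chosen so 1200 baud divides the sample rate exactly):

import numpy as np

def afsk_encode(bits, f_mark=1200, f_space=2200, baud=1200, sample_rate=48000):
    samples_per_bit = sample_rate // baud      # 40 samples per bit here
    phase = 0.0
    out = []
    for bit in bits:
        step = 2 * np.pi * (f_mark if bit else f_space) / sample_rate
        for _ in range(samples_per_bit):
            out.append(np.sin(phase))
            phase += step                      # the phase never jumps at a
    return np.asarray(out, dtype=np.float32)   # bit boundary, so no clicks

If you stream the output in chunks, carry the phase value across buffer boundaries too, or the click comes back at every buffer edge.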