Create Voice Frequency Graph when user records audio? [closed] - swift

I'm building a voice recording app and I would like to show a voice frequency graph similar to the "Voice Memos" app on iPhone.
I'm not sure exactly where to start building this. Could anyone give me some areas to look into and how to structure it? I'll then go learn all the areas and build it!
Thank you

Great Example Project by Apple:
https://developer.apple.com/library/ios/samplecode/aurioTouch/Introduction/Intro.html
The top chart measures Intensity vs. Time. This is the most intuitive representation of a sound, because a louder voice shows up as a larger spike. Intensity is measured in Percentage of Full-Scale (%FS) units, where 100% corresponds to the loudest sound the device can record.
When a person speaks into a microphone, the microphone produces a voltage that fluctuates up and down over time; that fluctuation is what this graph plots.
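As a minimal sketch of that idea, here is one way to reduce a buffer of samples to a single %FS value for plotting; the buffer itself would come from whichever capture API you end up using, and the function name is mine:

```swift
import Foundation

/// Reduce one buffer of audio samples (each in -1.0...1.0) to an intensity
/// value in Percentage of Full-Scale (%FS), suitable for one point on the
/// Intensity vs. Time chart.
func intensityPercentFS(_ samples: [Float]) -> Float {
    guard !samples.isEmpty else { return 0 }
    let peak = samples.reduce(0) { max($0, abs($1)) }  // loudest sample in this buffer
    return peak * 100  // 100 %FS == the loudest level the device can record
}
```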
The bottom chart is a Power Spectral Density. It shows where most of the power in the signal lies. For example, a deep loud voice would appear as a maximum at the lower end of the x-axis, corresponding to the low frequencies a deep voice contains. Power is measured in dB (a logarithmic unit) at each frequency.
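If you want to compute such a spectrum yourself, the Accelerate framework's vDSP routines do the heavy lifting. Below is a minimal sketch, assuming you already have a power-of-two-sized buffer of Float samples from whichever capture API you choose:

```swift
import Accelerate
import Foundation

/// Compute a power spectrum in dB from time-domain samples using vDSP's
/// real FFT. A sketch: windowing and scaling are omitted for brevity.
func powerSpectrumDB(_ samples: [Float]) -> [Float] {
    let n = samples.count
    guard n > 0, n & (n - 1) == 0 else { return [] }  // need a power-of-two length
    let log2n = vDSP_Length(log2(Float(n)))
    guard let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return [] }
    defer { vDSP_destroy_fftsetup(setup) }

    var realp = [Float](repeating: 0, count: n / 2)
    var imagp = [Float](repeating: 0, count: n / 2)
    var magnitudes = [Float](repeating: 0, count: n / 2)
    var decibels = [Float](repeating: 0, count: n / 2)

    realp.withUnsafeMutableBufferPointer { realPtr in
        imagp.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!,
                                        imagp: imagPtr.baseAddress!)
            // Pack the interleaved real signal into split-complex form.
            samples.withUnsafeBufferPointer { buf in
                buf.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                    vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                }
            }
            // In-place forward FFT of the real input.
            vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            // Squared magnitude of each frequency bin.
            vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))
        }
    }
    // Convert power to decibels relative to a reference of 1.0.
    var reference: Float = 1.0
    vDSP_vdbcon(magnitudes, 1, &reference, &decibels, 1, vDSP_Length(n / 2), 1)
    return decibels
}
```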
After a bit of Googling and testing, I think AVFoundation doesn't provide access to the audio data in real time; it's a high-level API primarily useful for recording to a file and playing it back.
The lower-level Audio Queue Services API seems to be the way to go (although I'm sure there are libraries out there that simplify its complex API).
Audio Queue Services Programming Guide:
https://developer.apple.com/library/mac/documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AboutAudioQueues/AboutAudioQueues.html#//apple_ref/doc/uid/TP40005343-CH5-SW18
DSP in Swift:
https://www.objc.io/issues/24-audio/functional-signal-processing/

Related

Connecting an Arduino, accelerometer and a camera [closed]

I am looking at using accelerometer(s) as a wearable sensor to track the acceleration of someone's leg while they perform various motions. I would like to video/take photos of the subject whilst the accelerometer(s) are collecting data. Is there some way to sync the camera with the data from the accelerometer, in order to draw the acceleration vectors on frames/images from the camera? The camera and accelerometer would therefore have to be synchronised in real time. Could I use MATLAB?
I have actually done something similar in the past, and it might give you a starting point.
I synchronized the video from a webcam with accelerometer data from an IMU connected to an Arduino. I ended up programming most of it in Java, but that's not strictly necessary; you could probably do it in MATLAB.
Assuming that you have already programmed the Arduino to sample the accelerometer, you can send that data to a PC via a serial connection. Then you would connect the camera to the same PC and use MATLAB to start recording from both of them simultaneously.
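To make the alignment step concrete, here is a minimal sketch (in Swift, though any language works) of matching each video frame to the accelerometer sample nearest in time, assuming both streams are timestamped against the same clock; the `AccelSample` type is hypothetical:

```swift
import Foundation

// A hypothetical timestamped accelerometer sample (seconds since capture start).
struct AccelSample {
    let time: Double
    let x: Double, y: Double, z: Double
}

/// For a video frame captured at `frameTime`, return the accelerometer sample
/// closest in time. `samples` must be sorted by timestamp. This is the core
/// of the sync: both streams share one clock (the moment capture started).
func sample(for frameTime: Double, in samples: [AccelSample]) -> AccelSample? {
    guard !samples.isEmpty else { return nil }
    // Binary search for the first sample at or after frameTime,
    // then check whether the previous sample is actually closer.
    var lo = 0, hi = samples.count - 1
    while lo < hi {
        let mid = (lo + hi) / 2
        if samples[mid].time < frameTime { lo = mid + 1 } else { hi = mid }
    }
    if lo > 0, abs(samples[lo - 1].time - frameTime) < abs(samples[lo].time - frameTime) {
        return samples[lo - 1]
    }
    return samples[lo]
}
```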
It's far too complicated for me to explain all of the details in this post, but I hope this gives you an idea of how to begin.
Good luck!

Ideas for a 2D game for a neural network to play [closed]

I am currently trying to implement my own neural-network library, and I would like to test it by letting it (and networks made with other libraries) play a 2D game. The problem is I can't really find a good game for a neural network to play.
Requirements for the game:
It should not involve skills like reaction time or precision. It should instead require some tactical skill.
It should be easily scorable, in order to create an efficient evolutionary algorithm.
It should be relatively simple.
It does not have to be a game that already exists, you can come up with one if you have an idea.
It may be a single-player game (like Mario) or a 1v1 game (like Pong).
It must not be any kind of MMO, RPG etc. I am looking for a small kind of mini-game.
The game should be well playable by a neural network. This means it should have a fixed number of inputs, each somehow normalizable between 0 and 1. Inputs can be sensors, angles to the closest objects, etc. Inputs should NOT be the pixels of the screen, because 3*1920*1080 is just too much. Up to about 100 inputs are manageable (I am a beginner and can't afford to let my computer calculate for hours just to evolve one generation).
Also, the game should definitely be a 2D game, since I am going to use an AWT JPanel to draw on.
I'm the main developer of Neataptic.js, which is basically a neural network library with neuro-evolution built into it. Just to give you some ideas, you might want to look at the following articles of mine:
Agar.io AI
Target-seeking AI
Some other suggestions:
Snake
Flappy bird
Bomberman
Neural networks have been tested on most simple 2D games, so if you're stuck you will always find code that might help you.
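To make the fixed-size, normalized-input requirement from the question concrete, here is a minimal sketch for a Pong-like game; the `PongState` type and its fields are hypothetical:

```swift
// Hypothetical Pong-like game state, to be flattened into a fixed-length
// input vector normalized to 0...1, as the question requires.
struct PongState {
    let ballX, ballY: Double        // ball position in game units
    let ballVX, ballVY: Double      // ball velocity components
    let paddleY: Double             // our paddle's vertical position
    let fieldWidth, fieldHeight: Double
    let maxSpeed: Double            // largest possible velocity magnitude
}

/// Map the raw game state onto a fixed-length vector in [0, 1].
func networkInputs(_ s: PongState) -> [Double] {
    func clamp01(_ v: Double) -> Double { min(max(v, 0), 1) }
    return [
        clamp01(s.ballX / s.fieldWidth),
        clamp01(s.ballY / s.fieldHeight),
        clamp01((s.ballVX / s.maxSpeed + 1) / 2),  // rescale [-1, 1] onto [0, 1]
        clamp01((s.ballVY / s.maxSpeed + 1) / 2),
        clamp01(s.paddleY / s.fieldHeight),
    ]
}
```

Five inputs like these are far below the ~100-input budget mentioned in the question, which keeps each generation of the evolutionary algorithm cheap to evaluate.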

Digital Audio Processing on the iPhone 4S [closed]

We've been working with some digital audio signal processing here and we've recently become worried that the iPhone 4S might not behave like the iPhone 4 did.
For instance: we have an app that listens to a specific sound and works on it. However, the data generated by the sound, while pretty constant on the iPhone 4, varies a lot on the iPhone 4S. Even though it is the same sound every single time, the data pattern seems to be randomly different.
Another possibly important piece of information: from what I can see in my tests so far, the iPhone 4S doesn't seem to work well with frequencies above 20.5 kHz (the iPhone 4 works very well up to 21.5 kHz).
My question is: did anyone already go through something like this? Are the iPhone 4 and iPhone 4S recording systems that different? Is this a hardware situation and/or should the software be modified to support it?
I know that those may not be proper questions, but I don't really know where to turn right now to reach some kind of diagnosis.
Thank you in advance.
The specification of both those iPhone models only states a frequency response up to 20 kHz. Anything above that might be subject to change without notice, not only between models, but possibly also for a given model if Apple is sourcing mics from multiple vendors. Furthermore, the roll-off behavior in both phase and frequency response between the 20 kHz limit and half the sampling rate can vary by a huge amount depending on the type and order of the anti-aliasing filters.
The frequency response can also vary depending on the direction of the sound and the directionality of the mic, which can also vary between mics and enclosures.
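If you want to quantify the difference between devices rather than eyeball it, one simple approach is to play a known test tone, record it, and measure the recorded power at exactly that frequency. Here is a minimal sketch using the Goertzel algorithm; the function name and the assumption that you already have a recorded buffer are mine:

```swift
import Foundation

/// Goertzel filter: measure the power of one target frequency in a recorded
/// buffer. Useful for checking how a device's input chain responds at, say,
/// 21 kHz. `samples` are Floats in -1...1; `sampleRate` is in Hz.
func goertzelPower(samples: [Float], targetHz: Double, sampleRate: Double) -> Double {
    let coeff = 2 * cos(2 * Double.pi * targetHz / sampleRate)
    var s1 = 0.0, s2 = 0.0
    for x in samples {
        let s0 = Double(x) + coeff * s1 - s2  // standard Goertzel recurrence
        s2 = s1
        s1 = s0
    }
    // Power at the target frequency (up to a constant scale factor).
    return s1 * s1 + s2 * s2 - coeff * s1 * s2
}
```

Running this at several frequencies on both an iPhone 4 and a 4S recording of the same tone would turn "doesn't seem to work well above 20.5 kHz" into a measurable roll-off curve per device.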

Music pitch affecting a game [closed]

You know that music visualization graph in Windows Media Player that changes based on frequency and pitch? What I want is to implement this in an iPhone game.
I'll try to explain this as well as I can. I will be playing classical music in a game. I want to use the music's volume/pitch/whatever it is called to affect gameplay. For example, if the volume suddenly rises in the music (not the volume of the iPhone, but of the actual recording), it would increase the chances of a spawn or something.
I'm not asking for a guide on how to implement this; I want to know if there is something that can give me numbers based on the pitch/volume/high and low notes of the song playing in the game.
Oh, and if anyone can tell me the name of the music graph I am looking for, it would be greatly appreciated.
This sample shows how to do what you want to do. The visualizer in WMP uses the amplitude (volume) of the signal as well as frequency information (probably obtained with a Fast Fourier Transform) to construct the visualization effect.
You can also use the simpler AVAudioPlayer API if you're interested in just responding to the music's current volume level (and want to skip the frequency analysis part). The API includes metering support: you enable it, then periodically poll the player for the current audio level.
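A minimal sketch of that metering approach, assuming an audio file URL you supply; the class name and the crude dB-to-0...1 mapping are mine:

```swift
import AVFoundation

/// Poll AVAudioPlayer's average power and map it onto 0...1
/// for gameplay decisions (spawn chances, etc.).
final class LoudnessMonitor {
    private let player: AVAudioPlayer

    init(url: URL) throws {
        player = try AVAudioPlayer(contentsOf: url)
        player.isMeteringEnabled = true   // must be enabled before metering works
        player.play()
    }

    /// Call from a timer or your game loop (e.g. every 0.1 s).
    func currentLoudness() -> Float {
        player.updateMeters()
        let db = player.averagePower(forChannel: 0)  // roughly -160 dB (silence) up to 0 dB (full scale)
        return max(0, min(1, (db + 60) / 60))        // crude mapping of -60...0 dB onto 0...1
    }
}
```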

How do CamScanner, Genius Scan, and JotNot work? [closed]

I was looking at CamScanner, Genius Scan, and JotNot and trying to figure out how they work.
They are known as 'Mobile Pocket Document Scanners.' What each of them does is take a picture of a document through the iPhone camera, find the angle/position of the document (because it is nearly impossible to shoot it straight on), straighten the photo, readjust the brightness, and then turn it into a PDF. The end result is what looks like a scanned document.
Take a look at one of the apps, Genius Scan, in action:
http://www.youtube.com/watch?v=DEJ-u19mulI
It looks pretty difficult to implement, but I'm thinking someone smart on Stack Overflow can point me in the right direction!
Does anyone know how one would go about developing something like that? What sort of libraries or image-processing technologies do you think they're using? Does anyone know if there is something open source that is available?
I found an open source library that does the trick:
http://code.google.com/p/simple-iphone-image-processing
It probably is pretty difficult. You will likely need to find at least some algorithms or libraries capable of detecting distorted text within bitmaps, analyzing the likely 2D and 3D geometric distortion of a text image, correcting that distortion with its inverse, and adaptively adjusting the image contrast with DSP filtering, plus use of iOS APIs to take photos in the first place.
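As one concrete illustration of the geometry-correction step on iOS, here is a minimal sketch using Core Image's built-in rectangle detector and the CIPerspectiveCorrection filter. Real apps layer much more on top (adaptive thresholding, text-aware dewarping), and the error handling here is simplified:

```swift
import CoreImage

/// Sketch of the "straighten the document" step: detect a document-like
/// rectangle in the photo, then un-skew it with a perspective correction.
func straightenDocument(in image: CIImage) -> CIImage? {
    let detector = CIDetector(ofType: CIDetectorTypeRectangle,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    guard let rect = detector?.features(in: image).first as? CIRectangleFeature else {
        return nil  // no document-like rectangle found
    }
    // Map the detected corners back to an axis-aligned rectangle.
    let filter = CIFilter(name: "CIPerspectiveCorrection")!
    filter.setValue(image, forKey: kCIInputImageKey)
    filter.setValue(CIVector(cgPoint: rect.topLeft), forKey: "inputTopLeft")
    filter.setValue(CIVector(cgPoint: rect.topRight), forKey: "inputTopRight")
    filter.setValue(CIVector(cgPoint: rect.bottomLeft), forKey: "inputBottomLeft")
    filter.setValue(CIVector(cgPoint: rect.bottomRight), forKey: "inputBottomRight")
    return filter.outputImage
}
```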