Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 4 years ago.
In Windows Media Player, you know that music visualization graph that changes based on frequency and pitch? What I want is to implement something like it in an iPhone game.
I'll try to explain this as well as I can. I will be playing classical music in a game, and I want to use the music's volume/pitch/whatever it is called to affect gameplay. For example, if the volume suddenly rises in the music (not the volume of the iPhone, but the actual playing of the music), it would increase the chance of a spawn or something.
I'm not asking for a guide on how to implement this; I want to know if there is something that can give me numbers based on the pitch/volume/high and low notes of the song that is playing in the game.
Oh, and if anyone can tell me the name of the music graph I am looking for, it would be greatly appreciated.
This sample shows how to do what you want. The visualizer in WMP uses the amplitude (volume) of the signal as well as frequency information (probably obtained with a Fast Fourier Transform) to construct the visualization effect.
You can also use the simpler AVAudioPlayer API if you're interested in just responding to the music's current volume level (and want to skip the frequency analysis part). The API includes metering calls that you can poll periodically for the current audio level.
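For illustration, here's a minimal Swift sketch of that polling approach; the 0.1 s interval, the dB-to-0...1 mapping, and the spawn hook are all arbitrary choices for the example, not part of the API:

```swift
import AVFoundation

// Minimal sketch: poll AVAudioPlayer's metering API and map the music's
// current loudness to a spawn probability. `spawnIfLucky` is a hypothetical
// game hook, not part of any framework.
final class MusicDrivenSpawner {
    private let player: AVAudioPlayer
    private var timer: Timer?

    init(musicURL: URL) throws {
        player = try AVAudioPlayer(contentsOf: musicURL)
        player.isMeteringEnabled = true   // must be enabled before reading levels
        player.play()
    }

    func startPolling() {
        timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            self.player.updateMeters()
            // averagePower(forChannel:) is in decibels: roughly -160 dB
            // (silence) up to 0 dB (full scale).
            let db = self.player.averagePower(forChannel: 0)
            let loudness = max(0, (db + 60) / 60)   // crude 0...1 mapping of the top 60 dB
            self.spawnIfLucky(loudness: loudness)
        }
    }

    private func spawnIfLucky(loudness: Float) {
        // Hypothetical rule: louder passages make spawns more likely.
        if Float.random(in: 0...1) < loudness * 0.2 {
            // spawnEnemy()
        }
    }
}
```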
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 10 months ago.
There's a digital card app called "Bumpp" that enables app-to-app communication when two devices are brought near each other (akin to NFC, but it doesn't need the devices to be placed back to back). One device continuously broadcasts a sound, and the other continuously listens for it. The app claims to emit ultrasound audio, but I guess audio at any frequency is good enough for me.
Is it possible to achieve the same via Flutter?
The sound emitted must be distinguishable from other audio, so my app doesn't accidentally pick up and decode sound from unrelated sources.
I just need to transmit something simple, like an integer. No need to transfer things like files.
Both apps, I presume, will need some kind of agreed protocol for what sound pattern to emit and what sound pattern to listen for.
Is there any good place where I can start reading about this?
I don't think there is a Flutter-specific library for this, but since Flutter apps can call any native API via a MethodChannel, the fact that you're using Flutter is no obstacle.
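For example, here's a sketch of the iOS (Swift) side of such a MethodChannel; the channel name, the "emitTone" method, and the ToneEmitter helper are all hypothetical names for this example, and the Dart side would invoke the same channel and method:

```swift
import Flutter

// Sketch of the native iOS half of a hypothetical "emitTone" channel.
// Call this from AppDelegate once the FlutterViewController exists.
func registerSonicChannel(controller: FlutterViewController) {
    let channel = FlutterMethodChannel(name: "example.app/sonic",
                                       binaryMessenger: controller.binaryMessenger)
    channel.setMethodCallHandler { call, result in
        switch call.method {
        case "emitTone":
            // Hypothetical helper that synthesizes and plays a tone at the
            // requested frequency (e.g. with AVAudioEngine).
            if let hz = call.arguments as? Double {
                ToneEmitter.shared.play(frequencyHz: hz)
            }
            result(nil)
        default:
            result(FlutterMethodNotImplemented)
        }
    }
}
```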
Here's the working solution I found: qrtone
Unfortunately, it seems to have died before reaching version 1.0, but the minimum requirements are already in place.
It has a web demo and an Android demo. It also has a C port according to the readme (see the bottom), which may be necessary for use on iOS.
As mentioned in the readme, the communication method itself is very similar to traditional communication methods such as dial-up modem connections or barcodes (not QR). If you want to implement it from scratch, I suggest you start looking into that area.
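To make the dial-up analogy concrete, here is a toy Swift sketch of frequency-shift keying (FSK): each bit of a 16-bit integer becomes a short tone burst at one of two frequencies. The frequencies and bit duration are arbitrary example values; a real protocol (like qrtone's) adds synchronization and error correction on top:

```swift
import Foundation

// Toy FSK encoder: each bit of `value` becomes a 50 ms tone burst at one
// of two near-ultrasonic frequencies. Feed the samples to an audio output
// (e.g. an AVAudioEngine source node) to "broadcast" the integer.
func fskSamples(for value: UInt16,
                sampleRate: Double = 44_100,
                bitDuration: Double = 0.05,
                zeroHz: Double = 17_000,
                oneHz: Double = 18_500) -> [Float] {
    let samplesPerBit = Int(sampleRate * bitDuration)
    var samples: [Float] = []
    samples.reserveCapacity(16 * samplesPerBit)
    for bitIndex in (0..<16).reversed() {          // most significant bit first
        let hz = (value >> bitIndex) & 1 == 1 ? oneHz : zeroHz
        for n in 0..<samplesPerBit {
            let t = Double(n) / sampleRate
            samples.append(Float(sin(2 * .pi * hz * t)) * 0.5)
        }
    }
    return samples
}

// The receiver runs the opposite process: a Goertzel filter or FFT over
// microphone buffers decides, bit by bit, which frequency is present.
```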
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 years ago.
I'm building a voice recording app, and I would like to show a voice frequency graph similar to the Voice Memos app on iPhone.
I'm not sure exactly where to start building this. Could anyone give me some areas to look into and how to structure it? I'll then go learn those areas and build it!
Thank you
Great Example Project by Apple:
https://developer.apple.com/library/ios/samplecode/aurioTouch/Introduction/Intro.html
The top chart measures intensity vs. time. This is the most intuitive representation of a sound, because a louder voice shows up as a larger spike. Intensity is measured in percentage of full scale (%FS), where 100% corresponds to the loudest sound the device can record.
When a person speaks into a microphone, a voltage fluctuates up and down over time; this is what the top graph represents.
The bottom chart is a power spectral density (PSD). It shows where most of the power in the signal is. For example, a deep loud voice would appear as a maximum at the lower end of the x-axis, corresponding to the low frequencies a deep voice contains. Power is measured in dB (a logarithmic unit) at different frequencies.
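As a sketch of the math behind the bottom chart, here is one way to compute a power spectrum in Swift with Accelerate's FFT. It assumes a power-of-two buffer of mono samples, and the zero reference passed to the dB conversion is an arbitrary normalization choice:

```swift
import Accelerate

// Sketch: power spectrum (in dB) of one power-of-two-length buffer of
// mono Float samples, using Accelerate's real FFT.
func powerSpectrum(of samples: [Float]) -> [Float] {
    let n = samples.count
    let halfN = n / 2
    guard let fft = vDSP.FFT(log2n: vDSP_Length(log2(Float(n))),
                             radix: .radix2,
                             ofType: DSPSplitComplex.self) else { return [] }

    var inReal = [Float](repeating: 0, count: halfN)
    var inImag = [Float](repeating: 0, count: halfN)
    var outReal = [Float](repeating: 0, count: halfN)
    var outImag = [Float](repeating: 0, count: halfN)

    return inReal.withUnsafeMutableBufferPointer { inRealPtr in
        inImag.withUnsafeMutableBufferPointer { inImagPtr in
            outReal.withUnsafeMutableBufferPointer { outRealPtr in
                outImag.withUnsafeMutableBufferPointer { outImagPtr in
                    var input = DSPSplitComplex(realp: inRealPtr.baseAddress!,
                                                imagp: inImagPtr.baseAddress!)
                    var output = DSPSplitComplex(realp: outRealPtr.baseAddress!,
                                                 imagp: outImagPtr.baseAddress!)

                    // Pack the real signal into the split-complex layout
                    // that the real-to-complex FFT expects.
                    samples.withUnsafeBytes {
                        vDSP.convert(interleavedComplexVector: Array($0.bindMemory(to: DSPComplex.self)),
                                     toSplitComplexVector: &input)
                    }
                    fft.forward(input: input, output: &output)

                    // Squared magnitude per frequency bin, then to dB.
                    var magnitudes = [Float](repeating: 0, count: halfN)
                    vDSP.squareMagnitudes(output, result: &magnitudes)
                    return vDSP.powerToDecibels(magnitudes, zeroReference: Float(n))
                }
            }
        }
    }
}
```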
After a bit of Googling and testing, I think AVFoundation's high-level classes (AVAudioPlayer, AVAudioRecorder) don't provide access to the audio data in real time; they are primarily useful for recording to a file and playing back.
The lower-level Audio Queue Services API seems to be the way to go (although I'm sure there are libraries out there that simplify its complex API).
Audio Queue Services Programming Guide:
https://developer.apple.com/library/mac/documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AboutAudioQueues/AboutAudioQueues.html#//apple_ref/doc/uid/TP40005343-CH5-SW18
DSP in Swift:
https://www.objc.io/issues/24-audio/functional-signal-processing/
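One note: the answer above predates AVAudioEngine, a later AVFoundation addition that is now an easier route to raw buffers than Audio Queue Services. As a sketch, here's a microphone tap computing each buffer's RMS level in dB, i.e. the data behind the intensity-vs-time chart:

```swift
import AVFoundation
import Accelerate

// Sketch: tap the microphone and compute each buffer's RMS level in dB.
// Feeding these values to a plot gives the intensity-vs-time chart.
final class LevelMeter {
    private let engine = AVAudioEngine()

    func start(onLevel: @escaping (Float) -> Void) throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            guard let channel = buffer.floatChannelData?[0] else { return }
            var rms: Float = 0
            vDSP_rmsqv(channel, 1, &rms, vDSP_Length(buffer.frameLength))
            let db = 20 * log10(max(rms, .leastNormalMagnitude))  // avoid log10(0)
            onLevel(db)
        }
        try engine.start()
    }
}
```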
Closed. This question is off-topic. It is not currently accepting answers. Closed 10 years ago.
Hi, I am an indie developer.
As you know, many indie developers have very limited resources, so I decided to use GarageBand for my app, but I was unsure whether this is allowed.
If I make sounds and effects using GarageBand and its presets and resources, is it legal to use them?
Yes, it is. Apple's sound effects are royalty-free.
i. GarageBand/Jam Pack Software. You may use the Apple and third party audio loop content ("Audio Content"), contained in or otherwise included with the GarageBand/Jam Pack Software, on a royalty-free basis, to create your own original soundtracks for your video and audio projects. You may broadcast and/or distribute your own soundtracks that were created using the Audio Content, however, individual samples, sound sets, or audio loops may not be commercially or otherwise distributed on a standalone basis, nor may they be repackaged in whole or in part as audio samples, sound libraries, sound effects or music beds.
(from the iLife SLA - Software License Agreement)
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 7 years ago.
What is the best ACR (automatic content recognition) technology for building a second-screen television app?
Potential solutions may include: TVsync, TVtak.tv, Civolution, and Audible Magic.
Civolution primarily provides watermarking technology, and Audible Magic provides digital fingerprinting. Watermarking is good for forensics, but fingerprinting is better suited for second-screen applications. TVtak requires you to use the phone's camera, which might be less convenient for users. Both Civolution and Audible Magic listen via the microphone. TVsync is new to the market and unproven. Audible Magic has probably been around the longest and owns many patents, which gives them a significant advantage.
With watermarks, a barely detectable tone needs to be inserted into the original content during production. That is not the case with fingerprinting.
As pointed out above, there is no straightforward answer to "what is the best ACR?"; it depends on your requirements, the use case you are trying to cover, and the content you are trying to deliver. We at mufin offer an audio fingerprinting technology that is very robust and optimized for mobile applications, and that does not necessarily require an internet connection. Compared to watermarking, audio fingerprinting does not require modifying the reference audio signal, which is one of its main advantages.
You may check Syntec TV (an audio fingerprinting solution) and what they offer. They provide really efficient and fast recognition.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 7 years ago.
Does anybody know which is currently the best library for a real-time face-tracking solution on iPhone? I've done some research, but I've only found quite old articles about OpenCV ports. I would like to know if there is a specific, reliable, fast (and possibly free) AR solution for overlaying an image on the face in the iPhone camera's video stream in real time (not simply on a static image).
Any help (links, tutorials) would be great.
Thanks everybody!!
Elos
iOS 5 brings face detection as a native feature.
Basically, you just have to configure an object to act as the video output stream's delegate (your controller, for example) and use a CIDetector object (a class available only since iOS 5) to process the stream.
The CIDetector object will look for faces in each of your video's frames and return a CIFaceFeature object for each face found, with information such as the eye and mouth positions and the bounds (the rectangle the face was found inside).
You can check this blog for more implementation details:
https://web.archive.org/web/20130908115815/http://i.ndigo.com.br/2012/01/ios-facial-recognition/
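As a rough sketch of that flow (in Swift rather than the Objective-C of the era; the overlay hook at the end is hypothetical):

```swift
import AVFoundation
import CoreImage

// Sketch of the CIDetector flow described above: receive frames from an
// AVCaptureSession's video data output and detect faces in each one.
final class FaceDetector: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private lazy var detector = CIDetector(
        ofType: CIDetectorTypeFace,
        context: nil,
        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    )!

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let image = CIImage(cvPixelBuffer: pixelBuffer)
        let faces = detector.features(in: image).compactMap { $0 as? CIFaceFeature }
        for face in faces {
            // face.bounds is the rectangle containing the face; eye and
            // mouth positions are available when detected.
            if face.hasLeftEyePosition { _ = face.leftEyePosition }
            overlayImage(on: face.bounds)
        }
    }

    private func overlayImage(on rect: CGRect) {
        // Hypothetical hook: position your overlay image over `rect`.
    }
}
```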
OpenCV is the best, I think.
Check out this tutorial:
http://www.morethantechnical.com/2009/08/09/near-realtime-face-detection-on-the-iphone-w-opencv-port-wcodevideo/
https://github.com/beetlebugorg/PictureMe
A starting point... he's using OpenCV.