No sound output when running JSyn on Raspberry Pi (Raspbian Jessie)

I've started writing a Java synthesizer using the JSyn library. It works well on Windows and OS X, but not when I run it on Raspbian. When the program starts I do notice some activity on the headphone output: it produces some faint noise, but not the clear, loud sawtooth wave it produces on Windows and OS X. Which sound device is the correct one to select as output when starting the synthesizer if I want to use the headphone jack? There are 4 available when I call AudioDeviceManager.getDeviceCount().

It is hard to know which of the 4 devices to use. This example will list them by name and also indicate which one is the default input or output.
https://github.com/philburk/jsyn/blob/master/tests/com/jsyn/examples/ListAudioDevices.java
It is also possible that the CPU cannot keep up. Try just playing a single oscillator. A sine wave is good because then you can easily hear any click or distortion. Here is an example that does that:
https://github.com/philburk/jsyn/blob/master/tests/com/jsyn/examples/PlayTone.java
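If it helps, here is a rough sketch that combines those two examples: list the devices first, then start the synthesizer on an explicitly chosen output device and play a quiet sine tone. The class name and the way the output device is picked are placeholders; the JSyn calls follow the linked samples, so check them against the version of the library you are using.

import com.jsyn.JSyn;
import com.jsyn.Synthesizer;
import com.jsyn.devices.AudioDeviceFactory;
import com.jsyn.devices.AudioDeviceManager;
import com.jsyn.unitgen.LineOut;
import com.jsyn.unitgen.SineOscillator;

public class PickOutputDevice {
    public static void main(String[] args) throws InterruptedException {
        Synthesizer synth = JSyn.createSynthesizer();
        AudioDeviceManager audioManager = AudioDeviceFactory.createAudioDeviceManager(true);

        // List every device so you can spot the one wired to the headphone jack.
        int numDevices = audioManager.getDeviceCount();
        for (int i = 0; i < numDevices; i++) {
            System.out.println(i + ": " + audioManager.getDeviceName(i)
                    + " (maxOutputChannels=" + audioManager.getMaxOutputChannels(i) + ")"
                    + (i == audioManager.getDefaultOutputDeviceID() ? " [default output]" : ""));
        }

        // Replace this with the index of the headphone-jack device from the list above.
        int outputDeviceID = audioManager.getDefaultOutputDeviceID();

        // Start the engine on that device (no input) and play a quiet sine for a few seconds.
        synth.start(44100, AudioDeviceManager.USE_DEFAULT_DEVICE, 0, outputDeviceID, 2);
        SineOscillator osc = new SineOscillator();
        LineOut lineOut = new LineOut();
        synth.add(osc);
        synth.add(lineOut);
        osc.frequency.set(440.0);
        osc.amplitude.set(0.4);
        osc.output.connect(0, lineOut.input, 0); // left
        osc.output.connect(0, lineOut.input, 1); // right
        lineOut.start();
        synth.sleepFor(4.0);
        synth.stop();
    }
}

If a plain sine tone already clicks or distorts with this, the Pi's CPU (or the chosen device) is the likely culprit rather than your synth code.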

Related

How can more than two audio programs play sound through a speaker simultaneously?

I've got a question that came up while studying operating systems.
As I've learned, an operating system is a kind of resource manager, and an audio program running on my PC will use the speaker as a resource. So an audio program will use the speaker allocated to it by the OS.
When I run two or more audio program processes on my PC, their sounds come out of the speaker simultaneously.
I wonder what the mechanism for this is. Do the processes hold and release the resource as they switch between the running and ready states? Or does the OS make the processes share the resource?
Multiple sounds can be mixed together additively. For software, this mostly means a small buffer of samples where you add (with saturation) the samples from 2 or more streams of digitized audio before sending the result to the speaker(s). Of course, sound cards are also likely to be capable of doing this mixing themselves (with some hardware-specific limit on the maximum number of streams that can be handled).
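In code, that mixing step is just a clamped add per sample. A minimal Java sketch, assuming two mono streams of signed 16-bit PCM (a real mixer would also handle volume scaling and an arbitrary number of streams):

public class Mixer {
    // Mix two 16-bit PCM buffers additively, saturating at the 16-bit limits
    // instead of letting the sum wrap around (wrapping would sound like harsh clicks).
    public static short[] mix(short[] a, short[] b) {
        int length = Math.min(a.length, b.length);
        short[] out = new short[length];
        for (int i = 0; i < length; i++) {
            int sum = a[i] + b[i]; // widen to int so the addition itself cannot overflow
            if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
            if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
            out[i] = (short) sum;
        }
        return out;
    }
}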
For the "PC speaker" there's no support for digitized sound or much else (it only supports one fixed-frequency tone at a time). If that's what you're asking about, then you can (with a relatively high amount of overhead) use pulse-width modulation (using a pair of timers) to force it to play digitized sound and still do the mixing in software. Alternatively, you can nerf the audio so that only one tone occurs at a time (e.g. if there are two, pick the higher-frequency tone or make one wait until the other is finished).

GStreamer crackling sound on Raspberry Pi 3 while playing video

I am playing a hardware accelerated video on the Raspberry Pi 3 using this simple pipeline:
gst-launch-1.0 playbin uri=file:///test/test.mp4
As soon as the video begins to play, any sound played in parallel through ALSA begins to crackle (tested with GStreamer and mplayer). It's a simple WAV file and I am using a USB audio interface.
Output on the headphone jack already crackles even without playing an audio file (but that jack is very low quality and I don't know if that's a different effect).
Playing the audio in the same pipeline as the video does not help. CPU load is only around 30% and there's plenty of free memory. I already overclocked the SD card. Playing two videos in parallel with omxplayer has no impact and the sound still plays well. But as soon as I start the pipeline above, the sound begins to crackle.
I tried "stress" to simulate high CPU load. This had no impact either, so the CPU does not seem to be the problem (but maybe the GPU?).
This is the gstreamer pipeline to test the audio:
gst-launch-1.0 filesrc location=/test/test.wav ! wavparse ! audioconvert ! alsasink device=hw:1,0
GST_DEBUG=4 shows no problems.
I tried putting queues in different places but nothing helped. Playing a video without audio tracks works a little better. But I have no idea where the resource shortage may lie, if there even is one.
It somehow seems like GStreamer is disturbing audio streams.
Any ideas where the problem may be are highly appreciated.
It seems like the USB driver of my interface is expecting a very responsive system. I bought a cheap new USB audio interface with a bInterval value of 10 instead of 1 and everything works fine now. More details can be found here: https://github.com/raspberrypi/linux/issues/2215
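For anyone checking their own interface: the endpoint descriptors (including bInterval) show up in the verbose USB listing, so something along the lines of
lsusb -v 2>/dev/null | grep -iE 'idProduct|bEndpointAddress|bInterval'
narrows the output down to the relevant fields (the grep pattern is just one way to filter; run plain lsusb -v if you want the full descriptors).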

Zero-value data in createMediaStreamSource input buffer when recording using Web Audio

I am attempting to record live audio via USB microphone to be converted to WAV and uploaded to a server. I am using Chrome Canary (latest build) on Windows XP. I have based my development on the example at http://webaudiodemos.appspot.com/AudioRecorder/index.html
I see that when I activate the recording, the onaudioprocess event input buffers (e.inputBuffer.getChannelData(0), for example) contain only zero-value data. Naturally, no sound is output or recorded when this is the case. I have verified the rest of the code by replacing the input buffer data with data that produces a tone, which shows up in the output WAV file. When I use approaches other than createMediaStreamSource, things work correctly. For example, I can use createObjectURL, set an src to that, and successfully hear my live audio played back in real time. I can also load an audio file and, using createBufferSource, see that during playback (which I hear) the inputBuffer has non-zero data in it, of course.
Since most of the web-audio recording demos I have seen on the web rely upon createMediaStreamSource, I am guessing this has been inadvertently broken in some subsequent release of Chrome. Can anyone confirm this or suggest how to overcome the problem?
It's probably not the version of Chrome. Live input still has some fairly strict requirements right now:
1) Input and output sample rates need to be the same on Windows
2) Windows 7+ only - I don't believe it will work on Windows XP, which is likely what is breaking you.
3) Input device must be stereo (or >2 channels) - many, if not most, USB microphones show up as a mono device, and Web Audio isn't working with them yet.
I'm presuming, of course, that my AudioRecorder demo isn't working for you either.
These limitations will be removed over time.

Using SoX vad without an audio device

I am trying to use the SoX vad (voice activity detection) feature to analyze a WAV file to determine whether it contains speech (unsurprisingly). However, I am using it on the command line on a Linux server that has no audio device. I would expect to be able to run the command and capture the output somehow, but it seems like the vad feature depends on the "play" command, and that appears to depend on an audio device.
Is there a way that I can do this without an audio device?
It works here; how did you run it? Here's what I did:
sox infile.wav outfile.wav vad
outfile.wav is trimmed at the front until voice is detected.
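If you also want trailing silence removed, the usual vad idiom is to reverse the audio, run vad again, and reverse back, e.g.:
sox infile.wav outfile.wav vad reverse vad reverse
None of this needs an audio device; sox only reads and writes the files, so it should run fine on a headless server.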

Beat detection: detecting audio from an app and producing a wave or bar for that audio, just like Winamp or Windows Media Player

I am trying to understand the concept of beat detection, and I found that it works by detecting sound through the microphone. So my first question is: won't it be a disadvantage if I am detecting sound from the microphone? When we are using the device, other sounds from the environment are picked up as well, so the actual beat of the sound will not be produced.
My second question (actually where I got stuck): I found that this beat detection is not able to access the iPod library. Will I be able to produce beats if I fetch a song from the iPod library in my app and then use it with beat detection?
http://www.cubicproductions.com/index.php?option=com_content&view=article&id=67&Itemid=82
http://www.gearslutz.com/board/product-alerts-older-than-2-months/457617-beatdetektor-iphone-app-open-source-algorithm-bpm-detection.html
I would be very thankful for any reference/link other than those provided above to help me understand beat detection better.
Edit 1
I have got the code for the above from this link, but this code is in C++ and it says we have to convert the code to an Xcode project using CMake. I somehow managed to convert the code to an Xcode project, but then I only have .cpp files, so how should I run the program on the iPhone?
OK, I was somehow able to solve my problem with Apple's sample code: aurioTouch.
I loaded a song into that example and produced the beats of the song. On the iPhone we can only access the sound for beat detection through the mic, so aurioTouch uses the same approach for beat detection.