I just finished installing Amazon Echo on my Raspberry Pi.
I'm wondering how I could change the beep (the feedback sound you hear after saying "Alexa") to something else, like an audio file that says "I'm listening."
Has anyone tried this or thought about it?
Maybe I could change it somewhere before installing, but I found nothing.
I don't think that's difficult. Look in:
samples/javaclient/src/main/resources/res/
and tweak the files alarm.mp3, error.mp3, start.mp3 and stop.mp3 to taste.
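If that's right, the swap itself is just replacing files: keep a backup of the stock beep and copy your own "I'm listening" clip over start.mp3, then presumably rebuild or restart the sample client so it picks up the new resource. The snippet below only demonstrates the idea in a scratch directory with placeholder files; in a real checkout, RES would be the res/ folder from the path given in the answer.

```shell
# Demo only: uses a scratch directory and text stand-ins for the mp3 files.
# In a real checkout: RES=samples/javaclient/src/main/resources/res
TMP="$(mktemp -d)"
RES="$TMP/res"
mkdir -p "$RES"
printf 'beep' > "$RES/start.mp3"              # stands in for the stock beep
printf 'I am listening' > "$TMP/custom.mp3"   # stands in for your own clip

cp "$RES/start.mp3" "$RES/start.mp3.bak"      # keep the original sound
cp "$TMP/custom.mp3" "$RES/start.mp3"         # swap in the replacement
ls "$RES"
```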
In my internet research I often found how to update main.py or the firmware over LoRa, but never how to update my GPy over LTE.
My rough idea: in boot.py, set up LTE and subscribe to an MQTT topic where the new version of the program is published. The GPy should then fetch the new file, run the update, and end up with a new main.py.
Is this even possible, or can I only update main.py over LoRa? Also, what could such code look like? I should say I'm quite new to Pycom and have never used one before; what I can do is simple MicroPython stuff on the ESP32.
Someone else asked the same question but didn't get any helpful answer. Link to the question: OTA on Pycom gPy
Thanks for any help.
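The flow described in the question (boot.py brings up LTE, subscribes to MQTT, swaps in a new main.py, resets) can be sketched roughly as below. The Pycom module names (network.LTE, umqtt.simple.MQTTClient, machine) are the standard ones on that firmware, but the broker address, topic, and the convention that the MQTT payload is the entire new main.py are assumptions for illustration; this has not been run on a GPy.

```python
# Sketch only: ota_loop() is GPy-specific and will not run on a desktop;
# apply_update() is plain Python. Broker/topic names are hypothetical.
import os

def apply_update(new_code, path="/flash/main.py"):
    """Write the new main.py atomically: write to a temp file first, then
    rename over the old one, so a power loss cannot leave a half-written
    main.py behind."""
    tmp = path + ".new"
    with open(tmp, "w") as f:
        f.write(new_code)
    os.rename(tmp, path)  # rename replaces the old file in one step

def ota_loop():
    """Device-side wiring (runs on the GPy, e.g. from boot.py)."""
    from network import LTE               # Pycom firmware module
    from umqtt.simple import MQTTClient   # MQTT client bundled with Pycom builds
    import machine

    lte = LTE()
    lte.attach()                          # attach to the LTE network
    while not lte.isattached():
        pass
    lte.connect()                         # start a data session
    while not lte.isconnected():
        pass

    def on_message(topic, msg):
        apply_update(msg.decode())        # assumed payload: full new main.py text
        machine.reset()                   # reboot into the new code

    client = MQTTClient("gpy-1", "broker.example.com")  # hypothetical broker
    client.set_callback(on_message)
    client.connect()
    client.subscribe(b"devices/gpy-1/update")           # hypothetical topic
    while True:
        client.wait_msg()                 # block until an update arrives
```

The atomic write-then-rename in apply_update is the important part: if the device loses power mid-update, the old main.py is still intact.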
My question has to do with the libasound function snd_pcm_hw_params, in connection with code that plays a sound file. I am new to ALSA programming. Starting from a coding example I found on the internet, I wrote a small program that plays a 7-second .wav file on the default sound card.

When I run this code several times in a row, occasionally (but not always) the requisite call to snd_pcm_hw_params, which writes the previously filled-in snd_pcm_hw_params_t struct to the driver, returns an error code of -2 (ENOENT). I have no idea what this means, nor how to handle or prevent it; my code just emits an error message and bails. Usually, if I run it again, the code runs fine. That's fine for my use, but eventually this code is supposed to be given to a non-programmer, and I'd like to either prevent the error or resolve it internally without involving said non-programming user.

I note here that the user is supposed to be able to cause an early abort of the program by clicking a button; when this happens, my code calls snd_pcm_drop, followed by snd_pcm_close. If the program runs to completion and plays all 7 seconds of the .wav file, it finishes up by calling snd_pcm_drain, followed by snd_pcm_close.

Any help or suggestions would be greatly appreciated. :)
I'm trying to execute a method when the system volume changes.
I've tried using

DistributedNotificationCenter.default().addObserver(self, selector: #selector(volumeChanged(_:)), name: NSNotification.Name(rawValue: "com.apple.sound.settingsChangedNotification"), object: nil)

but it didn't work.
Well, it does work. But only if the System Preferences app is open.
What's the right way to accomplish this task?
PS: note that this is on macOS, not iOS.
After trying countless approaches I found a nice workaround: instead of searching for a probably non-existent notification, I capture the physical key-press event.
Since the media keys don't send a normal CGEvent, I came up with this solution: Capture OSX media control buttons in Swift
Note that the Touch Bar simulates such a key event, so any app you write using this method will also work on MacBook models that have the Touch Bar.
It's probably not the ideal solution, but it works. If anyone knows a better way please let me know.
I have a fairly straightforward setup in which a RemoteIO unit is taking input, doing a bit of processing, sending it out the output, and writing the output to a file. Right now, I'm just generating test signals inside of my RemoteIO render callback, so I don't really care about anything coming from the 'actual' input. My render callback is called and works a treat in the simulator, but is never called at all when run on the phone. Any ideas where I should start looking? Am happy to post code--just not sure what everyone would like to see...
I knew that things had worked in the past, so I started digging through the repo. Foolishly, I had changed the kAudioSessionProperty_AudioCategory of my AudioSession from kAudioSessionCategory_PlayAndRecord to kAudioSessionCategory_RecordAudio and forgotten to change it back. Hope this helps someone else avoid the same stupid mistake...
Just an hour ago I was solving the same problem. In my case I had declared a variable of type AudioUnit in the header file; after I used AudioComponentInstance instead of AudioUnit, it started working on my devices as well.
So possibly it could be this.
I have a Flex 3 frontend that worked fine in Flash Player 9. But now that I've upgraded to Flash Player 10, the ?_method=PUT/DELETE hack is not working anymore; all those requests show up as a POST on the backend. I did some reading, and it sounds like I need to use as3httpclientlib AND run a socket policy server to grant access to port 80 (or any other port) in order to use as3httpclientlib.

So my question is: are you freakin' kidding me? In earlier versions of Flex/Flash Player, all I had to do was add a simple string ("_method=") to the URL. Now I have to do the hokey pokey AND turn myself around? Really? Please, someone tell me that I've got this all wrong and that _method= is, in fact, still supported. Otherwise, it's BYE BYE FLEX/FLASH PLAYER - NEVER AGAIN!
I understand you completely, my friend; I just had similar trouble with authentication headers under Adobe's new security policy. Anyway, if you share your code, maybe I could help, because your goal is not clearly explained in your post. Let me know, because a Java sandbox and Silverlight are not the better way in this case, but maybe some JavaScript could save our time.