I have a script that uses Balloon Tips to alert users to progress. However, by default balloon tips make an annoying noise. In a perfect world I would be able to set my balloon tips to be silent, while leaving any other alerts to behave in the normal way. But I don't see anything along the lines of $balloontip.balloonTipMute=$true.
My second thought was to mute the machine for the duration of my test, and I found this thread that seems to be what I want. However, the Core Audio API approach Alexandre mentions is not working for me, and the simple SendKeys approach doesn't work if the machine is already muted, as in that case I turn sound on right before proceeding to annoy the snot out of the user.
So, starting from the bottom, is there a way in PowerShell to see if audio is already muted, so I can toggle only if needed? Or, can someone verify that the Core Audio API approach really should work from Windows 7 on, and I need to start looking for a mistake? Or best yet, is there a secret sauce that makes Balloon Tips silent?
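For reference, the Core Audio interfaces in question do exist from Windows Vista/7 on and expose the mute state directly. A minimal C++ sketch of what I'm ultimately trying to reach (IAudioEndpointVolume::GetMute; error handling omitted, and a PowerShell version would have to wrap these same COM interfaces, e.g. via Add-Type) would look roughly like this:

#include <windows.h>
#include <mmdeviceapi.h>
#include <endpointvolume.h>
#include <cstdio>

int main() {
    CoInitialize(NULL);

    // Get the default playback (render) device.
    IMMDeviceEnumerator* enumerator = NULL;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumerator);
    IMMDevice* device = NULL;
    enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    // Ask it for the endpoint-volume interface and read the mute flag.
    IAudioEndpointVolume* volume = NULL;
    device->Activate(__uuidof(IAudioEndpointVolume), CLSCTX_ALL, NULL,
                     (void**)&volume);
    BOOL muted = FALSE;
    volume->GetMute(&muted);              // SetMute(TRUE, NULL) would mute
    printf("Muted: %s\n", muted ? "yes" : "no");

    volume->Release();
    device->Release();
    enumerator->Release();
    CoUninitialize();
    return 0;
}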
I have a multiplayer project which has some forever loops with checking code inside of them.
The problem is, multiple computers might process this and change crabx or craby due to lag in the variables dvotes, uvotes, lvotes, or rvotes. Only one machine should change this, though.
This can be easily solved by giving each player an ID like many people do in SQL. I would just check if the ID is 1, and that would be the "operating machine". I would then do all of these checks on that one machine. It would do things a Scratch server would do if you could program it...
The problem with this is that there is no way to detect when a player leaves the game. There is no block called "on exit" or "on stop button pressed". How would I go about doing this? I have seen projects with a button that players click to exit, but some people won't click it, or won't even see it.
Thanks in advance!
Option 1
I've never been especially successful with cloud data myself, but I've heard the theory on this before:
Essentially, each player gets a "counter". Their computer then constantly increases that counter. If the counter ever stops increasing (which will be detected by the other computers, who are all looking after one another), the project will know that the user has left and one of the computers will take care of removing their ID and other data.
Obviously, this is much easier said than done. (As I said, I've never gotten complex cloud data to work well for myself, but I've seen it done successfully and explained.)
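Stripped of the Scratch specifics, the watching logic is just a timeout check. Here is a rough C++ sketch of the idea (the names are made up, and in Scratch you would build the same thing out of cloud variables and forever loops):

#include <map>
#include <string>

struct Heartbeat {
    long lastCounter = 0;    // last cloud-counter value we saw for this player
    int  stalledChecks = 0;  // how many checks in a row it has not moved
};

std::map<std::string, Heartbeat> players;  // player ID -> heartbeat state

// Called once per check cycle with the player's latest counter value.
// Returns true once the counter has been stuck long enough to assume the
// player has left, so their ID and other data can be cleaned up.
bool hasPlayerLeft(const std::string& id, long counterNow, int timeoutChecks = 10) {
    Heartbeat& hb = players[id];
    if (counterNow != hb.lastCounter) {
        hb.lastCounter = counterNow;     // counter moved: player is still here
        hb.stalledChecks = 0;
        return false;
    }
    return ++hb.stalledChecks >= timeoutChecks;
}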
Option 2
Alternatively, you might be better off taking advantage of this cloud api created by MegaApuTurkUltra. I find that stealing from others tends to be the best way of solving problems when it comes to code. ;)
I'm getting pretty bad results with Area Learning: localization takes a very long time and I have no idea what's happening. Did I map the area enough? Are there enough landmarks? Is the ADF alright? No clue.
Is there any way to provide visual feedback while doing the actual motion tracking and area learning? I keep seeing it in Google videos but haven't found any way of doing it in the Unity SDK.
I would like something like this in my video overlay: https://youtu.be/NTZZCtmR3OY?t=10m57s
By the way, my results in Unity are FAR worse than this demo: sometimes it takes minutes for the device to localize, and only at a certain spot in the room; the next minute the very same spot doesn't work again. Quite frustrating. I have no idea what app the presenter uses; my ADF Inspector, for instance, reliably crashes every time I try to load any ADF. (Using Wasat, and I've recently deleted and re-installed everything.)
It is not supposed to behave like that. It should not be that bad under normal daylight conditions; in a small area, the device should be able to localize using the ADF within 3-5 seconds. The video shows the usual case -- it always works like this.
If your kernel is up to date and you are using the correct development kit, I would recommend contacting the Tango team's customer service directly:
tango-help@google.com
Perhaps it is caused by a defective device. If so, asking for an exchange would solve the problem.
There is a real-time audio app for iPhone that adds some effects (reverb, delay, etc.) to the input sound and plays it back.
So I'm having a classic amplified audio loop issue. You probably are familiar with this. It happens often when you put the mic close to the loudspeaker (sound from input gets amplified, goes out, gets back in and so on).
It would be great to hear any ideas how to fix this.
I have already tried to:
1. Limit the maximum sound volume to prevent the feedback from growing.
2. Use filters to limit some frequencies.
3. Subtract the previously output signal from the new input signal (which, I think, is the best approach, but it isn't perfect: even if the timing is good, this method spoils the sound too much).
Thanks.
Your number 3 and number 2 combined are probably the best. Look up adaptive acoustic echo cancellation.
AEC using nLMS is quite easy to implement but takes a bit of CPU. It may work if you use a lower sample rate, depending on how long in ms your echo is.
There is a fast version that uses an FFT for adaptation. It doesn't adapt as quickly, but it will probably be fine in a mobile app where there isn't a long echo tail.
The way AEC works is that it converges on an acoustic model for the echo path between speaker and microphone and then uses that model to subtract the output echo from the microphone input. It knows what is going out, it puts that through the model and obtains a guess as to what the echo will be, then removes that echo from the input. As time goes on, the model gets better and the echo smaller.
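If you want to roll your own, a bare-bones NLMS canceller is only a screenful of code. This is a hedged sketch (float samples, made-up tap count and step size, no double-talk detection), not a production implementation:

#include <vector>
#include <cstddef>

class NlmsEchoCanceller {
public:
    explicit NlmsEchoCanceller(std::size_t taps, float mu = 0.5f)
        : w_(taps, 0.0f), x_(taps, 0.0f), mu_(mu) {}

    // farEnd: the sample we just sent to the speaker.
    // mic:    the sample we just read from the microphone.
    // Returns the mic sample with the estimated echo removed.
    float process(float farEnd, float mic) {
        // Shift the far-end history and insert the newest sample.
        for (std::size_t i = x_.size() - 1; i > 0; --i) x_[i] = x_[i - 1];
        x_[0] = farEnd;

        // Run the far-end history through the model to guess the echo.
        float echoEstimate = 0.0f, power = 1e-6f;  // small constant avoids /0
        for (std::size_t i = 0; i < w_.size(); ++i) {
            echoEstimate += w_[i] * x_[i];
            power += x_[i] * x_[i];
        }

        // The "error" is the echo-cancelled microphone signal.
        float error = mic - echoEstimate;

        // NLMS update: nudge the model toward a better echo estimate,
        // normalised by the far-end signal power.
        float step = mu_ * error / power;
        for (std::size_t i = 0; i < w_.size(); ++i) w_[i] += step * x_[i];

        return error;
    }

private:
    std::vector<float> w_;  // adaptive model of the echo path
    std::vector<float> x_;  // recent far-end (speaker) samples
    float mu_;              // adaptation rate, typically 0 < mu <= 1
};

Feed it the speaker output and the microphone input sample by sample; as the model converges, the returned signal contains less and less of the echo. The number of taps has to cover the echo tail, which is why a lower sample rate makes it so much cheaper.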
You might already know this, but just to be on the safe side - make sure you're routing the output to the right speaker. As it says in the docs when you set the "play and record" audio session category, the default output is the top speaker (the one you put your ear to during a call). There's another speaker at the bottom, and since it's a lot nearer to the microphone, it'll produce a lot more feedback. If you set the "play and record" category it would normally take a manual override to route to the wrong (bottom) speaker, but I thought I'd mention it to be sure.
To help other people trying to solve this issue: AEC plus a combination of high-pass and low-pass filters.
Speex (http://speex.org) does the job -- its AEC part, specifically. High-pass and low-pass filters are quite easy to implement (see Apple's AccelerometerGraph example for LP and HP filter implementations).
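For the filter part, a first-order version really is just a few lines. A hedged C++ sketch (parameter names made up), similar in spirit to the filters in that sample:

// One-pole low-pass filter: alpha in (0, 1); smaller alpha = lower cutoff.
float lowPass(float input, float& state, float alpha) {
    state += alpha * (input - state);
    return state;
}

// A simple high-pass is the input minus its low-passed version.
float highPass(float input, float& state, float alpha) {
    return input - lowPass(input, state, alpha);
}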
I've been doing some tests recently with an app switching between networks (Wi-Fi, 3G, LTE, offline). I've been using Reachability to detect these switches, but I'm not currently happy with the implementation when the app goes into an "offline state", for example.
I'm basically just firing NSLogs when the no-network state kicks in, but I've seen it go off in between switches. So my question... how do you best manage these things? Do you give it a delay of a few seconds after the no-network notification before going into the "offline state"? Or are there other ways to improve this?
This is a big issue when streaming audio. I wouldn't want to go into the offline state when it's just a simple network switch or a small connection loss. One of the things I would do is wait for the buffer to be empty before changing states.
Yes, just check twice. Using your example: when you get the "offline" notification, you flip a flag (BOOL claimingOffline). Then, when your buffer empties, you check the status again. If you're back online, you unflip the aforementioned flag. If you're still offline, you keep the flag set and go into "offline mode". This technique lets you wait until the moment you really need to know (when the buffer empties). Otherwise you could use a timer, but that's suboptimal and not nearly as elegant...
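Stripped of the Reachability and audio specifics, the flag logic is tiny. A hedged C++ sketch with made-up names:

struct OfflineGate {
    bool claimingOffline = false;

    // Called from the reachability callback: just note the claim, don't act yet.
    void onReachabilityChanged(bool reachable) {
        claimingOffline = !reachable;
    }

    // Called when the playback buffer actually runs dry -- the moment you
    // really need to know. Re-check reachability and decide.
    bool shouldEnterOfflineMode(bool reachableNow) {
        if (reachableNow) {          // it was only a blip or a network switch
            claimingOffline = false;
            return false;
        }
        claimingOffline = true;      // genuinely offline: change states
        return true;
    }
};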
Is there a way to simulate user activity on the desktop on Windows? This is the situation: a friend of mine works from home. His company recently decided to provide its employees with a communication tool which they have to keep running in the background. Apart from its main functionality, it also has a very intimidating side effect: it tracks user activity. This means that the program monitors keystrokes and mouse movements. If a user is idle for, say, 5 minutes, an icon next to his name indicates his idle status to all other users, much like in instant messengers such as Skype. Now, while this may be useful in IM programs, we both find it a bit disturbing in a work-related context, for obvious reasons.
Doing some Google searching only gave me shareware links or cheating tools for MMORPGs, but maybe I searched for the wrong terms. My first guess would have been a small process running in the background which imitates keystrokes or mouse movements at regular intervals. But maybe there is another way to deal with this. (Oh, and complaining about the lack of privacy to the employer is not an option ;) Also please note that I don't want to promote laziness or question an employer's rights over his employees.)
Any comments and help appreciated. Thanks!
There is an easy way to make the cursor move in C++.
It's something like:
POINT pos;
pos.x = 10;   // POINT members are lowercase x and y
pos.y = 10;
SetCursorPos(pos.x, pos.y);   // the Win32 call (from <windows.h>) that actually moves the cursor
I don't know if this is the best way, but it works.
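If just moving the cursor doesn't register as activity, another rough idea is to send a harmless keypress on a timer with SendInput. This is only a sketch (untested; F15 is picked because almost nothing uses it):

#include <windows.h>

int main() {
    for (;;) {
        INPUT in[2] = {};
        in[0].type = INPUT_KEYBOARD;
        in[0].ki.wVk = VK_F15;                // press a key nobody maps
        in[1] = in[0];
        in[1].ki.dwFlags = KEYEVENTF_KEYUP;   // and release it again
        SendInput(2, in, sizeof(INPUT));

        Sleep(60 * 1000);                     // wait a minute, then repeat
    }
}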
If you don't want to write your own program, I'm sure there are a lot of programs on the internet. You just need to google :).