We want a robot (a Roomba in our case) to know its location in a given room - MATLAB

OK,
So we want our robot - a Roomba (the nice vacuum cleaner) - to know its location in a given room.
That means we have the map of the room and the robot is put somewhere and needs to know in a short time where it is located.
We looked at a lot of algorithms - the most relevant one was MCL (Monte Carlo Localization) for localizing robots in space.
We are afraid that it is too big for us, and we don't know where to start.
We would like to write the code in MATLAB.
So if anyone has any idea where we can find code, we would appreciate it a lot.
We are open-minded about the algorithm - so if you have a better one or something that might work, that would be great. The same goes for the language we write it in.
Thanks.
Liron.

Interesting.
I've read a lot about trying to keep track of where the Roomba is, but it seems like every system that has used only "internal" feedback from the Roomba has ended disastrously - meaning they try to keep track of the wheel positions, etc. The main problem is that you can't take into account wheel slip, which changes drastically based on surface and other factors.
I would recommend using either a stationary sensor that the Roomba can locate itself from, on-board sensors (such as a camera, whiskers, or ultrasonic), or a combination of the two.
Parallax (the BASIC Stamp people) make a great ultrasonic sensor package called the PING))) that is rated for about 6 ft. I've used it up to 15 feet, but it works great in close proximity for mapping.
hope this helps!
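For the MCL algorithm the question mentions, here is a deliberately minimal one-dimensional particle-filter sketch (Python rather than MATLAB, but it translates almost line for line; the room length, noise levels, and the single range-to-wall sensor are all made-up assumptions, not a complete localizer):

```python
import math
import random

def mcl_step(particles, motion, measurement, room_length=10.0, sigma=0.5):
    """One MCL iteration: motion update, sensor weighting, resampling."""
    # Motion update: move every particle, with noise standing in for wheel slip.
    moved = [p + motion + random.gauss(0, 0.1) for p in particles]
    # Weighting: compare each particle's predicted range to the far wall
    # with the actual sensor reading.
    weights = [math.exp(-((room_length - p) - measurement) ** 2 / (2 * sigma ** 2))
               for p in moved]
    total = sum(weights) or 1e-12
    # Resampling: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(particles))

random.seed(0)
particles = [random.uniform(0, 10.0) for _ in range(500)]  # unknown start
true_pos = 3.0
for _ in range(20):                 # robot drives forward in 0.1 m steps
    true_pos += 0.1
    particles = mcl_step(particles, 0.1, 10.0 - true_pos)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))           # converges near the true position (5.0)
```

A real 2-D version adds a heading angle and a map lookup for the expected sensor reading, but the predict/weight/resample cycle stays the same.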


Recording multi-channel audio input in real-time

I am trying to perform Time Difference of Arrival in real-time using the PS3 Eye. Since it has a built-in 4 microphone array, I've successfully rearranged the array into a square array and cross-correlated the signals using MATLAB to obtain a relatively accurate TDOA algorithm. However, so far I've been recording the signal, saving the files (4 individual files for each microphone in the array), and then feeding those files into MATLAB to read after-the-fact.
My problem is: MATLAB doesn't recognize the PS3 Eye's microphones separately; it only recognizes the device as a whole. So far, Audacity is one of the few programs that actually handles it well, but I am inexperienced with the program and don't know its real-time capabilities. Does anyone have suggestions as to how I can perform real-time signal analysis in this manner? If something other than the PS3 Eye would work better, I am open to suggestions. Thanks.
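The cross-correlation step described above can be sketched for one microphone pair like this (NumPy here; the sample count, test signal, and 7-sample delay are invented for illustration):

```python
import numpy as np

def tdoa_samples(sig_a, sig_b):
    """Estimate the delay (in samples) of sig_b relative to sig_a
    from the peak of their full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

rng = np.random.default_rng(0)
source = rng.standard_normal(2048)   # broadband test signal
delay = 7                            # true offset in samples
mic_a = source
mic_b = np.roll(source, delay)       # same signal arriving 7 samples later
lag = tdoa_samples(mic_a, mic_b)
print(lag)  # 7
```

Dividing the lag by the sample rate gives the time difference; with four microphones you repeat this for each pair.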
I know very little about MATLAB or the PS3 Eye, but various hardware microphones allow you to capture a single audio stream containing multiple (typically 2) channels. The audio data will come to you in frames, each frame containing a single sample for each channel.
I'm not really sure what you mean by "recognizes as a whole", but I assume you mean MATLAB is mixing the channels so that the device only produces one usable channel. If you can capture the channels to file, and they all originate from the same device (i.e. hardware clock), you should be fine except that this solution is not "realtime".
There is a similar discussion on Sound Exchange which ends up suggesting the Microcone. There are a variety of other products, from microphone arrays to digital mixers for analog mic sources, also, but your question seems to be mainly about how to get the data with software.
In short, make sure you are seeing a single device with multiple channels. This will ensure each channel uses the same hardware clock and will prevent drift issues.
This is just a wild guess, as I don't know much about MATLAB's real-time input options.
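To illustrate the frame layout described above, here is a quick deinterleaving sketch (NumPy, with made-up integer samples standing in for audio data):

```python
import numpy as np

channels = 4   # e.g. the Eye's microphone array
frames = 5
# Interleaved frames as a multi-channel device typically delivers them:
# [ch0, ch1, ch2, ch3, ch0, ch1, ...] - integers stand in for samples.
interleaved = np.arange(channels * frames)

# Reshape to (frames, channels); each column is one microphone's stream.
per_channel = interleaved.reshape(frames, channels).T
print(per_channel[0])   # channel 0's samples: 0, 4, 8, 12, 16
```

Because every channel comes from the same stream, the per-channel arrays stay sample-aligned, which is exactly what cross-correlation needs.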
Maybe try Reaper (http://www.reaper.fm/). It has great multi-track capabilities and you can extend it (I think the scripting language is Python). Nice documentation and third-party contributions, plus OSC and ReWire support. So maybe you could route the audio to Reaper, do some data normalization there in Python, and then route the data to MATLAB.
Or you could use Pure Data, which is open source and very open, with lots of patches (basic processing units) that you could probably put together.
HTH
BTW, I am in no way affiliated with Reaper or PD.
EDIT: you might also want to consider SuperCollider (http://supercollider.github.io/) or ChucK (http://chuck.cs.princeton.edu/).
Here's a lead, but I haven't been able to test it, yet.
On Windows, you can record a single 4 track ogg audio file from the Eye with Audacity (using the WASAPI driver selection).
As of 23 Jul 2014, the pa-wavplay MEX builds for 32-bit and 64-bit support WASAPI. You will have to rebuild the PortAudio library to select the WASAPI interface as described here, and then you can get all four tracks in MATLAB (on Windows).
Sadly, if you're not on Windows, I don't have any suggestions. Adjusting the PortAudio build might help, but I only know that WASAPI works with the Eye.

Audio feedback issue in an iPhone application

There is a real-time audio app for iPhone that adds some effects (reverb, delay, etc.) to the input sound and plays it back.
So I'm having a classic amplified audio feedback loop issue. You are probably familiar with this: it often happens when you put the mic close to the loudspeaker (sound from the input gets amplified, goes out, comes back in, and so on).
It would be great to hear any ideas how to fix this.
I have already tried to:
Limit the maximum sound volume to prevent the feedback from growing.
Use filters to limit some frequencies.
Subtract the previously output signal from the new input signal (which I think is the best way, but it isn't perfect; even if the timing is good, this method degrades the sound too much).
Thanks.
Your number 3 and number 2 combined are probably the best. Look up adaptive acoustic echo cancellation.
AEC using NLMS is quite easy to implement, but it takes a bit of CPU. It may work if you use a lower sample rate, depending on how long your echo is in milliseconds.
There is a fast version that uses an FFT for adaptation. It doesn't adapt as quickly, but it will probably be fine in a mobile app where there isn't a long echo tail.
The way AEC works is that it converges on an acoustic model for the echo path between speaker and microphone and then uses that model to subtract the output echo from the microphone input. It knows what is going out, it puts that through the model and obtains a guess as to what the echo will be, then removes that echo from the input. As time goes on, the model gets better and the echo smaller.
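As a rough illustration of the NLMS idea described above (not production echo cancellation; the tap count, step size, and simulated echo path are all invented):

```python
import numpy as np

def nlms_cancel(far_end, mic, taps=32, mu=0.5, eps=1e-8):
    """Normalized LMS echo canceller: adapt an FIR model of the
    echo path and subtract the predicted echo from the mic signal."""
    w = np.zeros(taps)
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = far_end[n - taps + 1:n + 1][::-1]  # latest loudspeaker samples
        echo_hat = w @ x                       # predicted echo at time n
        e = mic[n] - echo_hat                  # residual after cancellation
        out[n] = e
        w += (mu / (x @ x + eps)) * e * x      # NLMS weight update
    return out

rng = np.random.default_rng(1)
far = rng.standard_normal(8000)                   # loudspeaker signal
true_path = np.array([0.6, 0.0, 0.3, 0.0, 0.1] + [0.0] * 27)
mic = np.convolve(far, true_path)[:len(far)]      # simulated echo at the mic
residual = nlms_cancel(far, mic)
# After convergence the residual carries far less energy than the echo.
tail_ratio = np.mean(residual[-2000:] ** 2) / np.mean(mic[-2000:] ** 2)
print(tail_ratio < 0.01)
```

The weight vector `w` is the "acoustic model" the answer describes: as it converges toward `true_path`, the predicted echo matches the real one and the residual shrinks.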
You might already know this, but just to be on the safe side - make sure you're routing the output to the right speaker. As it says in the docs when you set the "play and record" audio session category, the default output is the top speaker (the one you put your ear to during a call). There's another speaker at the bottom, and since it's a lot nearer to the microphone, it'll produce a lot more feedback. If you set the "play and record" category it would normally take a manual override to route to the wrong (bottom) speaker, but I thought I'd mention it to be sure.
To help other people trying to solve this issue: AEC plus a combination of high-pass, low-pass filters.
Speex (http://speex.org): its AEC part does the job. High-pass and low-pass filters are quite easy to implement (see Apple's AccelerometerGraph example for LP and HP filter implementations).

Can an NXT brick really be programmed with Enchanting, an adaptation of Scratch?

I want my students to use Enchanting, a derivative of Scratch, to program Mindstorms NXT robots to drive a pre-programmed course, follow a line, and avoid obstacles (two-state, five-state, and proportional line following). Is Enchanting developed enough for middle school students to program these behaviors?
I'm the lead developer on Enchanting, and the answer is: Yes, definitely.
The video demoing Enchanting 0.0.4 shows how to make a proportional line follower (and you could extend it to use a PID controller if you wish). If you download the latest version, 0.2.2, it includes a sample showing a two-state line follower (and you can see a video and download the code here). You - or, with some instruction and playing around, a middle-schooler - can readily create a program for n states, and, especially if you follow a behaviour-oriented approach, you can avoid obstacles at the same time.
As far as I know, yes and no.
What Scratch does with its sensor board, Lego WeDo, and the S4A (Scratch for Arduino) version, as well as, I believe, with the NXT, is basically use its remote sensor protocol: it exchanges messages on TCP port 42001.
A client written to interface with that port from an external system allows the exchange of messages and sensor data. Scratch can pick up sensor state and pass info to actuators every 75 ms, according to the S4A discussion.
But that isn't the same as programming the controller: we control the system remotely, which is already very nice, but we're not downloading a program into the controller (the NXT brick) that the robot can use to act independently when it is disconnected.
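If it helps, a sketch of the message framing used by Scratch's remote sensor protocol (the sensor name and value are made up; actually sending the bytes over a socket to port 42001 is omitted):

```python
import struct

def scratch_message(payload: str) -> bytes:
    """Frame one message for Scratch's remote sensor protocol:
    a 4-byte big-endian length header followed by the UTF-8 payload."""
    data = payload.encode("utf-8")
    return struct.pack(">I", len(data)) + data

# A sensor update as a connected client would send it over TCP port 42001.
msg = scratch_message('sensor-update "distance" 42')
print(msg[:4].hex(), msg[4:].decode())
```

A client that speaks this framing can feed sensor values into Scratch and listen for broadcasts, which is how the S4A-style bridges mentioned above work.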
Have you looked into 12Blocks? http://12blocks.com/ I have been using it for the Propeller and it's great, and it has an NXT option (which I have not tested).
It's an old post, but I'll answer anyway.
Enchanting looks interesting, and seems to be still an active project.
I would actually take the original Scratch (1.4), as it is more familiar and reliable.
It's easy to interface hardware with Scratch using the remote sensor protocol. I use a simple serial interface (over a USB adapter) which provides 3 digital inputs and 3 digital outputs. With that, it's possible to implement projects such as traffic lights and light/water/heat sensors, using only LEDs, resistors, reed contacts, photo-transistors, switches, and PTSs.
The cost is under $5.
For some motor-based projects like factory belts, elevators, etc., not much more is required: a battery and a couple of transistors/relays/motor drivers.

How is time-based programming done?

I'm not really sure what the correct term is, but how are time-based programs like games and simulations made? I've just realized that I've only written programs that wait for input and then do something, and I'm amazed that I have no idea how I would write something like Pong :)
How would something like a flight simulator be coded? It obviously wouldn't run as fast as the computer could run it. I'm guessing everything is executed on some kind of cycle. But how do you handle it when a computation takes longer than the cycle?
Also, what is the correct term for this? Searching "time-based programming" doesn't really give me helpful results.
Games are split into simulation (decide what appears, disappears or moves) and rendering (show it on the screen, play sounds). Simulation is designed to be time-dependent: you can tell the simulator "50ms have elapsed" and it will compute 50ms worth of simulation. A typical game loop will render (which takes an arbitrary amount of time), then run the simulator for the duration since the last time the simulator was run.
If the code runs fast, then the simulator steps will be short (only a few ms) and the game will render the scene more often.
If the code runs slowly, the simulator steps will have longer steps and there will be proportionally fewer renders.
If the simulator runs slower than the simulation itself (it takes 100ms to compute 50ms worth of simulation) then the game cannot run. But this is an exceedingly rare situation, and games sometimes have emergency systems that drop the quality of the simulation to improve performance when this happens.
Note that time-dependent does not necessarily mean millisecond-level precision. Some systems implement simulations using time-based functions (traveled distance equals speed times elapsed time), while others run fixed-duration simulation steps.
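The render-then-simulate loop described above can be sketched as a fixed-timestep loop with an accumulator (one common variant of the idea; the numbers and the simulated "frame cost" below are made up for illustration):

```python
DT = 0.02  # fixed simulation step: 20 ms of game time

class Ball:
    def __init__(self):
        self.x, self.vx = 0.0, 1.0   # position (m) and speed (m/s)
    def step(self, dt):
        self.x += self.vx * dt       # advance the simulation by dt seconds

def run(ball, total_time, frame_cost):
    """Render loop: however long each frame takes, the simulator is
    stepped in fixed DT increments until it has caught up."""
    accumulator = 0.0
    elapsed = 0.0
    while elapsed < total_time:
        # Pretend rendering this frame took `frame_cost` seconds.
        elapsed += frame_cost
        accumulator += frame_cost
        while accumulator >= DT:     # simulate the time the frame consumed
            ball.step(DT)
            accumulator -= DT
    return ball.x

# A slow renderer (50 ms/frame) and a fast one (10 ms/frame) both
# simulate 2 seconds of game time, so the ball ends up in (nearly)
# the same place either way.
slow = run(Ball(), 2.0, 0.05)
fast = run(Ball(), 2.0, 0.01)
print(round(slow, 2), round(fast, 2))
```

This is the property the answer describes: slow hardware renders less often, but the simulator still consumes the same amount of game time.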
I think the correct term is "Real-time application".
For the first question, I'm with spender's answer.
If you know the elapsed time between two frames, you can calculate (with physics, for example) the new position of the elements based on the previous ones.
There are two approaches to this, each with advantages and disadvantages.
You can either go frame-based, whereby a timer signals n new frames every second and you calculate movement simply by counting elapsed frames. If computation exceeds the available time, the game slows down.
...or you keep the frame concept, but also keep an absolute measure of time: when the next frame is signalled, you calculate world movement from the amount of elapsed time. This means that things happen in real time, but in the case of severe CPU starvation, gameplay will become choppy.
There's an old saying that "the clock is an actor". Time-based programs are event-driven programs, but the clock is a constant source of events. At least, that's a fairly common and reasonably easy way of doing things. It falls down if you're doing hard realtime or very high performance things.
This is where you can learn the basics:
http://www.gamedev.net/reference/start_here/
Nearly all games are programmed with a real-time architecture, and the computer's capabilities (and the coding, of course :)) determine the frame rate.
Game programming is a really complex job, including object modeling, scripting, math calculations, fast and nice rendering algorithms, and other things like pixel shaders.
So I would recommend checking out available engines in the first place (just google "free game engine").
The basic logic is to create an infinite loop (while(true){}) in which you:
Listen for callbacks - you get the keyboard, mouse, and system messages here.
Do the physics according to the time passed since the previous frame and the user input.
Render the new frame (GDI, DirectX, or OpenGL).
Have fun
Basically, there are 2 different approaches that allow you to add animation to a game:
Frame-based animation: easier to understand and implement, but it has some serious disadvantages. Think about it this way: imagine your game runs at 60 FPS and it takes 2 seconds to draw a ball that goes from one side of the screen to the other. In other words, the game needs 120 frames to move the ball across the screen. If you run this game on a slow computer that's only able to render 30 FPS, then after 2 seconds the ball will be in the middle of the screen. The problem with this approach is that rendering (drawing the objects) and simulation (updating the positions of the objects) are done by the same function.
Time-based animation: a more sophisticated approach that separates the simulation code from the rendering code. The number of FPS the computer can render does not influence the amount of movement (animation) that has to happen in 2 seconds.
Steven Lambert wrote a fantastic article about these techniques, as well as a third approach that solves a few problems with time-based animation.
Some time ago I wrote a C++/Qt application to demonstrate all these approaches and you can find a video of the prototype running here:
Source code is available on Github.
Searching for time-based movement will give you better results.
Basically, you either have a timer loop or an event triggered on a regular clock, depending on your language. If it's a loop, you check the time and only react every 1/60th of a second or so.
Some sites
http://www.cppgameprogramming.com/
Ruby game programming
PyGame
Flight simulation is one of the more complex examples of real-time simulation. The fluid dynamics, control systems, and numerical methods involved can be overwhelming.
As an introduction to the subject of flight simulation, I recommend Build Your Own Flight Sim in C++. It is out of print, but seems to be available used. This book is from 1996, and is horribly dated. It assumes a DOS environment. However, it provides a good overview of the topics, covers numerical integration, basic flight mechanics and control systems. The code examples are simplistic, reasonably complete, and do not assume the more common toolsets used for graphics today. As with most things, I think it is easier to learn the subject with a more basic reference.
A more advanced text (college senior, first-year graduate school) is Principles of Flight Simulation, which provides excellent coverage of the breadth of topics involved in building a flight simulation. It would make an excellent reference for anyone seriously interested in flight simulation as an engineering task, or for more realistic game development.

How do the dynamics of a sports simulation game work?

I would like to create a baseball simulation game.
Are these sports management games based on luck? A management game based entirely on luck is not fair, but it cannot be too predictable either. How does the logic behind these games work?
It's all about probability and statistics. You set the chance of something happening based on some attributes you assign, and then the random factor comes in during play to make things less predictable and more fun. Generally you get a load of statistics from some external source, encode them into your game's database, and write a system that compares random numbers to these statistics to generate results that approximate the real-life observations that the stats were based on.
Oversimplified example: say your game has Babe Ruth, who hits a home run on 8.5% of pitches, and some lesser guy who hits one on 4%. These are the attributes you test against. For each pitch you simulate, pick a random number between 0 and 100%. If it's less than or equal to the attribute, the batter scores a home run; if it's greater, they don't. After a few pitches you'll start to see Babe Ruth's quality show relative to the other guy, as he will tend to hit over twice as many home runs.
In reality you'd have more than 1 attribute for this, depending on the kind of pitching for example. And the other player might get to choose which relief pitchers to use to try and exploit weaknesses in the batter's abilities. So the gameplay comes from the interplay between these various attributes, with you trying to maximise the chance that the attribute tests work in your favour.
PS. Apologies for any mistakes regarding baseball: I'm English so can't be expected to understand these things. ;)
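The home-run example above can be sketched in a few lines (the pitch counts and the seed are arbitrary):

```python
import random

def simulate_pitches(hr_rate, pitches, rng):
    """Count home runs when each pitch is an independent random
    test against the batter's home-run attribute."""
    return sum(rng.random() < hr_rate for _ in range(pitches))

rng = random.Random(42)
ruth = simulate_pitches(0.085, 10000, rng)      # 8.5% attribute
lesser = simulate_pitches(0.040, 10000, rng)    # 4% attribute
print(ruth, lesser)   # Ruth ends up with roughly twice as many
```

Over a small number of pitches either batter can get lucky, which is exactly the unpredictability the question asks about; over a season, the attributes dominate.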
As you have already figured out, the core component of such games is the match simulation engine. As Spence said, you want that simulation to "look right" rather than to "be right".
I worked on a rugby game simulation some time ago, and there's an approach that works quite well: your match is a finite state machine. Each game phase is a state and has an outcome that translates into a phase transition or a change in game state (score, replacements, ...).
Add an event/listener system to handle things that are not strictly related to the structure of the game you're simulating, and you have a good structure (every time something happens in your simulation, a foul for instance, fire an event; the listeners can be a commentary-generation system or an AI responsible for the teams' strategies).
You can start with a rough simulation engine that handles things at the team level using an average of your players' stats, and then move on to something more detailed that simulates things at the player level. I think that kind of iterative approach suits a game simulation very well: because you want it to look right, as soon as an element looks right you can stop iterating on it and work on another part of your system.
Randomness is of course part of the game because, as you said, you don't want games to be too predictable. A very simple thing to do is to make virtual dice rolls against player and team statistics when they perform a particular action (throwing the ball, for instance).
Edit: I'm assuming we're talking about management games like Hattrick, where you manage a roster and simulate game results, rather than 2D/3D graphical simulations.
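A minimal sketch of the finite-state-machine idea above (the phase names, probabilities, and scoring are invented for illustration, loosely rugby-flavoured):

```python
import random

# Each phase is a state; its handler returns the next phase and may
# change the match state (score, events). Names and odds are made up.
def kickoff(state, rng):
    return "open_play"

def open_play(state, rng):
    roll = rng.random()
    if roll < 0.05:
        state["score"] += 5                 # a try: back to a kickoff
        return "kickoff"
    if roll < 0.15:
        state["events"].append("foul")      # listeners could react here
    return "open_play"

PHASES = {"kickoff": kickoff, "open_play": open_play}

def simulate(steps=200, seed=3):
    rng = random.Random(seed)
    state = {"score": 0, "events": []}
    phase = "kickoff"
    for _ in range(steps):
        phase = PHASES[phase](state, rng)
    return state

result = simulate()
print(result["score"], len(result["events"]))
```

The `events` list is where the event/listener system would hook in: a commentary generator or team AI subscribes to these events without the state machine knowing about it.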
Usually timing plus randomness to make the game replayable. EDIT: To clarify, I mean in terms of when the pitch comes at you: if it were exact, you could learn to play it perfectly, so you need a small randomness around the exact time that you swing to give the game some chance. AI has a big part in this if you do things like curve balls, add the ability to steal bases, etc.
Getting games "right" isn't a matter of design or maths so much as feel. You try something, play it, and see if it's fun. If it isn't, try different algorithms or gameplay until you get it right.
A simulation is very much about an imagined world: you create classes that represent all aspects of that world. You need to model the players, specify the game rules, and define the game dynamics.
http://cplus.about.com/b/2008/05/31/nathans-zombie-simulator-in-c.htm
Look here for an agent-based model: http://www.montemagno.com/projects.html
One great thing about creating your own game is that you get to decide how the game logic is going to work. If you want the game to have a high degree of luck you can design that in. If you don't want the game to have a high degree of luck then you can design it out.
It's your game, you get to make up the rules.
Are you talking about a baseball game you play, or a game simulator? Baseball games can be arcade-like, fantasy-sports-like, or a blend.
I was at Dynamix when Front Page Sports Baseball was made. It was stats-based, meaning that you could play out games and seasons using the stats of the various players. That meant licensing Major League data. It used stats to influence outcomes.
There was a regular mode and a "fast-sim" mode that could breeze through the games faster.
I think Kylotan has the right strategy. Baseball has stats for everything. Simulate a game at the most detailed level you can manage. Combine player stats to determine a percentage chance for every outcome, and use randomness to decide the outcome.
For instance: the chance of a hit is based on batting average, the pitcher's ERA, etc. The opposing team's fielding percentage determines the chance that an out becomes an error.
Every stat you display to the 'manager' when selecting lineups should have some effect on gameplay - otherwise the manager is making decisions based on misleading information.
You ought to check out Franchise Ball. There is a browsable demo.
http://promo.franchiseball.com