Best book for learning sensor fusion, specifically regarding IMU and GPS integration [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 6 years ago.
I have a requirement of building an Inertial Measurement Unit (IMU) from the following sensors:
Accelerometer
Gyroscope
Magnetometer
I must integrate this data to derive the attitude of the sensor platform and the external forces acting on it (e.g. subtracting the gravity component due to tilt from the measured linear acceleration).
I must then use this information to complement a standard GPS unit, providing more consistent position estimates than GPS can provide alone.
I do understand the basic requirements of this problem:
Combine the sensor readings (to cancel noise and subtract the gravity component of acceleration).
Remove noise. (Kalman filter)
Integrate IMU measurement into GPS.
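To make the steps above concrete, here is a minimal one-dimensional sketch of the usual approach: a Kalman filter that predicts with the IMU's acceleration and corrects with GPS position fixes. This is an illustration in Python/NumPy, not a production design; the noise levels `q` and `r` are placeholder values, not tuned numbers.

```python
import numpy as np

def kalman_step(x, P, accel, gps_pos, dt, q=0.5, r=5.0):
    """One predict/update cycle of a 1-D constant-acceleration Kalman filter.

    x: state vector [position, velocity]; P: 2x2 state covariance.
    accel: IMU acceleration (control input); gps_pos: GPS position measurement.
    q, r: illustrative process/measurement noise levels (placeholders).
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])                      # state transition
    B = np.array([0.5 * dt**2, dt])                            # acceleration input
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])                     # process noise
    H = np.array([[1.0, 0.0]])                                 # GPS sees position only
    R = np.array([[r]])                                        # GPS measurement noise

    # Predict: propagate the state using the IMU acceleration
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

    # Update: correct the prediction with the GPS fix
    y = gps_pos - H @ x                                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                             # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

The same structure scales to the full 3-D problem by enlarging the state vector (position, velocity, attitude, sensor biases); that is essentially what the books below derive in detail.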
Whilst there are various libraries around that would do this for me (http://code.google.com/p/sf9domahrs/), I need to understand the mechanisms involved well enough to explain the techniques to others once I have implemented the solution.
I have been looking at the following resources, but I am unsure which I should go for...
I need something covering Sensor Fusion, Filtering, IMU, Integration.
Multisensor Fusion and Integration for Intelligent Systems
Global Positioning Systems, Inertial Navigation, and Integration
Mechatronics and Intelligent Systems for Off-road Vehicles
Autonomous Flying Robots: Unmanned Aerial Vehicles and Micro Aerial Vehicles
I hope someone experienced in this area can offer recommendations.
Many thanks.

I have implemented sensor fusion for the Shimmer platform. These have been a big help:
An introduction to inertial navigation
An Introduction to the Kalman Filter
Pedestrian Localisation for Indoor Environments
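For the attitude part specifically, a complementary filter is the simplest place to start before moving on to the Kalman filter covered in the papers above. The sketch below (my own illustration, not from those papers) fuses gyro integration, which is smooth but drifts, with an accelerometer tilt estimate, which is noisy but drift-free, for a single pitch angle; `alpha=0.98` is a typical illustrative weighting, not a tuned value.

```python
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse a gyro-integrated pitch with an accelerometer tilt estimate.

    pitch: previous pitch estimate (rad); gyro_rate: angular rate (rad/s).
    accel_x, accel_z: accelerometer components used to infer tilt from gravity.
    """
    gyro_pitch = pitch + gyro_rate * dt            # integrate angular rate (drifts)
    accel_pitch = math.atan2(accel_x, accel_z)     # tilt from gravity (noisy)
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch
```

The high-pass/low-pass split this performs is the intuition behind the more general Kalman formulation: trust the gyro over short timescales and the accelerometer over long ones.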

Is there an open-source heat exchanger library written in Modelica? [closed]

Is there an open-source heat exchanger library written in Modelica which covers different types of heat exchangers? Like on this web page https://en.wikipedia.org/wiki/Heat_exchanger
Based on your previous posts and your Wikipedia link I assume that you need heat exchanger models for power plant simulation.
The one in MSL (Modelica.Fluid.Examples.HeatExchanger) is a generic, discretized model; it does not calculate heat transfer/pressure drop from plate or tube/shell geometry, or condensation/evaporation, as can be found in various standards and textbooks. However, you can extend the model to include e.g. the number of tube tiers/rows, and you can add your own two-phase heat transfer models based on Modelica.Fluid.Pipes.BaseClasses.HeatTransfer.PartialFlowHeatTransfer. This requires a bit of work on your side.
Otherwise, there are a number of freely available options — among others:
ClaRa library by XRG/TLK Thermo (www.claralib.com) is extremely capable and aimed at power plant simulation. Note that the version you can find on GitHub is not the official library maintained by the original authors.
ThermoPower by Francesco Casella (github.com/casella/ThermoPower) has been around for many years and its models are well proven and documented.
Modelica Buildings Library (github.com/lbl-srg/modelica-buildings) has a number of different heat exchanger models that are all very easy to use but are without geometrical information. This library uses MSL fluid connectors.

I'm Interested in Virtual Reality [closed]

Lately I have become interested in VR, and I decided to learn more about it and the technologies behind it.
I also have a background in mathematics from university (applied mathematics).
Can anyone suggest an article, book, or other resource to learn more?
Free references preferred.
Thank you.
I would suggest you start by actually making things in VR. To read about it and to experience it are completely different things. I personally work with Oculus Rift. It is easy to develop for it because of its integration with various game engines. But as a cheaper option you could get started with Google Cardboard.
If you want to read up on the core technology behind VR, I would suggest starting with computer graphics. Once you have a grip on that, you can explore other topics, for example how tracking works, field-of-view (FOV) issues, etc.
For reading on graphics, have a look at this thread:
https://gamedev.stackexchange.com/questions/12299/what-are-some-good-books-which-detail-the-fundamentals-of-graphics-processing

Using Markov chains for procedural music generation [closed]

Does anyone know of an online resource where I can find stochastic matrices for an nth order Markov chain describing the probability of a note being played based on the previous n notes (for different musical genres, if possible)? I am looking for something similar to the second-order matrix found on this page: http://algorithmiccomposer.com/2010/04/openmusic-markov-chains-and-omlea.html
If not, or otherwise, what would be the best way to construct such a matrix for each genre? The article states that this can be done by hand or by analysing existing pieces of music. How could large amounts of music for each genre be processed to generate these matrices?
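The "analyse existing pieces" route mentioned in the article comes down to counting: slide a window over each note sequence, count how often each length-n context is followed by each note, and normalise the counts into probabilities. A minimal sketch in Python (my own illustration; note representation such as names or MIDI numbers is up to you):

```python
from collections import Counter, defaultdict

def build_transition_matrix(pieces, order=2):
    """Estimate an nth-order Markov transition matrix from note sequences.

    pieces: a list of note sequences (e.g. note names or MIDI numbers),
    one per analysed piece; aggregating many pieces of one genre yields
    that genre's matrix.
    """
    counts = defaultdict(Counter)
    for notes in pieces:
        for i in range(len(notes) - order):
            context = tuple(notes[i:i + order])    # previous `order` notes
            counts[context][notes[i + order]] += 1 # note that followed them
    # Normalise each context's counts into a probability distribution
    matrix = {}
    for context, followers in counts.items():
        total = sum(followers.values())
        matrix[context] = {note: n / total for note, n in followers.items()}
    return matrix
```

For example, `build_transition_matrix([["C", "D", "E", "C", "D", "G"]])` gives the context `("C", "D")` a 0.5 probability of being followed by `"E"` and 0.5 by `"G"`. Processing large amounts of music per genre is then a matter of parsing a symbolic corpus (e.g. MIDI files) into such sequences and feeding them all in.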
I have been doing research on this topic. The matrix you are looking for is highly dependent on what kind of music you want to generate.
One of the people I work with wrote this paper on the method used for this. It is based on using viewpoints to look at the music and then building a transition matrix over those viewpoints: http://www.ehu.es/cs-ikerbasque/conklin/papers/jnmr95.pdf
You can contact me if you need more specific info or collaboration.

What is the best ACR (auto content recognition) technology for building a second screen television app with? [closed]

What is the best ACR (auto content recognition) technology for building a second screen television app with?
Potential solutions may include: tvsync, tvtak.tv, civolution, and audible magic.
Civolution primarily provides watermarking technology, and Audible Magic provides digital fingerprinting. Watermarking is good for forensics, but fingerprinting is better suited for second-screen applications. TVtak requires you to use the phone's camera, which may be less convenient for users. Both Civolution and Audible Magic listen via the microphone. TVsync is new to the market and unproven. Audible Magic has probably been around the longest and owns many patents, which gives them a significant advantage.
With watermarks, a barely detectable tone must be inserted into the original content during production. That is not the case with fingerprinting.
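To illustrate that distinction: a fingerprint is derived from the signal itself rather than embedded into it. The toy sketch below (in Python/NumPy; real systems hash pairs of spectral peaks and are far more robust) keeps just the strongest FFT bin per frame as a crude landmark sequence.

```python
import numpy as np

def fingerprint(samples, frame=4096, hop=2048):
    """Toy landmark-style audio fingerprint.

    For each overlapping frame, record the index of the strongest FFT bin.
    This is only a minimal illustration of deriving an identifier from the
    audio; production fingerprinting uses hashed constellations of peaks.
    """
    peaks = []
    for start in range(0, len(samples) - frame, hop):
        window = samples[start:start + frame] * np.hanning(frame)  # reduce leakage
        spectrum = np.abs(np.fft.rfft(window))
        peaks.append(int(np.argmax(spectrum)))                     # dominant bin
    return peaks
```

Matching then means comparing a query's landmark sequence against a database of reference sequences, which is why no modification of the broadcast content is needed.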
As pointed out above, there is no straightforward answer to "what is the best ACR?" as it depends on your requirements, the use case you are trying to cover, and the content you are trying to deliver. We at mufin offer an audio fingerprinting technology that is very robust and optimized for mobile applications that do not necessarily require an internet connection. Compared to watermarking, audio fingerprinting does not require modification of the reference audio signal, which is one of its main advantages.
You may also check syntec tv (an audio fingerprinting solution) for what they offer; they provide really efficient and fast recognition.

Music pitch affecting a game [closed]

In Windows Media Player, you know that music visualization graph that changes based on frequency and pitch? I want to implement something like it in an iPhone game.
I'll try to explain this as well as I can. I will be playing classical music in the game, and I want to use the music's volume/pitch (or whatever it is called) to affect gameplay. For example, if the volume suddenly rises in the music (not the iPhone's volume, but the actual recording), it would increase the chance of a spawn or something similar.
I'm not asking for a guide on how to implement this; I want to know if there is something that can give me numbers based on the pitch/volume/high and low notes of the song playing in the game.
Also, if anyone can tell me the name of the music graph I am describing, it would be greatly appreciated.
This sample shows how to do what you want. The visualizer in WMP uses the amplitude (volume) of the signal as well as frequency information (probably obtained with a Fast Fourier Transform) to construct the visualization effect.
You can also use the simpler AVAudioPlayer API, if you're interested in just responding to the music's current volume level (and if you want to skip the frequency analysis part). The API includes a callback that notifies your app periodically of the current audio volume.
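Conceptually, both routes boil down to extracting two numbers per audio buffer: an RMS loudness and a dominant frequency. Here is a language-neutral sketch in Python/NumPy of that analysis (it illustrates the math only; it is not the iOS API):

```python
import numpy as np

def analyse_buffer(samples, rate):
    """Return (rms_volume, dominant_frequency_hz) for one audio buffer.

    rms_volume tracks loudness; dominant_frequency_hz is the strongest
    spectral component, found via an FFT of the windowed buffer.
    """
    rms = float(np.sqrt(np.mean(samples ** 2)))                      # loudness
    windowed = samples * np.hanning(len(samples))                    # reduce leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return rms, float(freqs[np.argmax(spectrum)])
```

Your gameplay logic would then poll these values each buffer and trigger spawns when the loudness crosses a threshold, or react to pitch via the dominant frequency.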