iBeacon: Particle filter extension for Fingerprinting position estimation - swift

I have implemented a full fingerprinting solution in my application.
Offline phase: I can create multiple observation points and calibrate them with the mean RSSI values of all the beacons in the room.
Live phase: here I compare the current RSSI values with the stored database values to find the closest position.
Now I've read that adding a particle filter can improve the accuracy of a fingerprinting solution.
Does anybody know how and why I could implement this?
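For context, the live phase described above usually boils down to a nearest-neighbour match against the calibrated fingerprints. Below is a minimal Python sketch of that step (the question is tagged Swift, but the idea translates directly); the beacon IDs, positions and RSSI values are made-up placeholders, not data from the question.

```python
# Minimal sketch of the "live phase": nearest-fingerprint matching by RSSI distance.
# All names and values (fingerprints, beacon IDs) are hypothetical placeholders.
import math

# Offline database: calibrated position -> mean RSSI per beacon ID.
fingerprints = {
    (0.0, 0.0): {"beacon_a": -55.0, "beacon_b": -70.0},
    (3.0, 0.0): {"beacon_a": -62.0, "beacon_b": -60.0},
    (3.0, 4.0): {"beacon_a": -75.0, "beacon_b": -52.0},
}

def match_fingerprint(live_rssi):
    """Return the calibrated position whose stored RSSI vector is closest (Euclidean)."""
    best_pos, best_dist = None, float("inf")
    for pos, stored in fingerprints.items():
        common = set(stored) & set(live_rssi)
        if not common:
            continue
        dist = math.sqrt(sum((stored[b] - live_rssi[b]) ** 2 for b in common))
        if dist < best_dist:
            best_pos, best_dist = pos, dist
    return best_pos

print(match_fingerprint({"beacon_a": -60.0, "beacon_b": -61.0}))  # -> (3.0, 0.0)
```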

I assume you can use them side by side as complementary solutions, since I'm not aware of an approach that combines both of them in practice.
Here is a nice paper about using particle filters with BLE; it also discusses other approaches, including fingerprinting.
To comment on your question: particle filters tend to work better when there is line of sight between the observer and the beacons. Your current solution, on the other hand, should be more accurate when there is no line of sight, especially since you already use a database to map beacon readings to your observation points.
What I would do as an "extension" is to use both methods side by side and take advantage of the database in known locations, depending on line of sight. For example, you could rely on the particle filter inside small rooms with few obstacles, and otherwise set a threshold on your estimate, compare it with the database value, and switch to fingerprinting inside more obstructed or larger indoor areas.
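One common way to combine the two (an assumption on my part, not something the paper above prescribes) is to let the fingerprint database drive the particle weights: predict particle motion, weight each particle by how well the fingerprint nearest to it matches the live RSSI reading, and resample. A rough Python/numpy sketch, where every constant (room size, noise levels, beacon layout) is an illustrative guess:

```python
# Hedged sketch: a bootstrap particle filter whose weights come from how well a particle's
# nearest fingerprint matches the live RSSI reading. All constants are illustrative guesses.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibrated fingerprints: (x, y) -> mean RSSI vector for beacons [a, b].
fp_positions = np.array([[0.0, 0.0], [3.0, 0.0], [3.0, 4.0]])
fp_rssi      = np.array([[-55.0, -70.0], [-62.0, -60.0], [-75.0, -52.0]])

N = 500
particles = rng.uniform(low=[0, 0], high=[4, 5], size=(N, 2))  # room assumed ~4 m x 5 m
weights = np.full(N, 1.0 / N)

def expected_rssi(points):
    """Expected RSSI vector for each point, taken from its nearest calibrated fingerprint."""
    d = np.linalg.norm(points[:, None, :] - fp_positions[None, :, :], axis=2)
    return fp_rssi[np.argmin(d, axis=1)]

def pf_step(live_rssi, motion_std=0.3, rssi_std=4.0):
    global particles, weights
    # 1. Predict: random-walk motion model (no pedometer assumed).
    particles += rng.normal(0.0, motion_std, particles.shape)
    # 2. Update: weight each particle by a Gaussian likelihood of the observed RSSI.
    err = expected_rssi(particles) - np.asarray(live_rssi)
    weights *= np.exp(-0.5 * np.sum((err / rssi_std) ** 2, axis=1))
    weights += 1e-300
    weights /= weights.sum()
    estimate = weights @ particles   # weighted mean = position estimate
    # 3. Resample (multinomial, to keep the sketch short; systematic resampling is better).
    idx = rng.choice(N, size=N, p=weights)
    particles, weights = particles[idx], np.full(N, 1.0 / N)
    return estimate

print(pf_step([-60.0, -61.0]))
```

In Swift you would do the same with plain arrays or Accelerate, and the fingerprint lookup can reuse the matching logic you already have.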

Related

Multiple drivers with Mapbox Optimization API?

Using the Mapbox Optimization API, is it possible to optimize the routes between multiple drivers?
Example: 6 locations are added, 2 drivers are added, the routes get split / optimized between the two drivers
I'm still in the planning stage, so I haven't poked around too much myself yet, but the code and all the examples I've seen are directed towards single driver optimization only... Has anybody done something like this before? Anything you can recommend to point me in the right direction?
Mapbox's Optimization API returns a duration-optimized route between the input coordinates, which amounts to solving the well-known "Travelling Salesman Problem". This is an NP-hard graph theory problem, meaning no general polynomial-time algorithm for it is known.
The underlying data used for computing the aforementioned duration-optimized route are the cost functions of the edges connecting the coordinates input to the API request. You could retrieve the cost values (including traffic) between a set of these coordinate positions using Mapbox's Matrix API.
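As a rough illustration (not an official snippet), fetching that duration matrix could look like the following in Python; the endpoint and parameter names reflect the public Matrix API docs as I understand them, and the token and coordinates are placeholders:

```python
# Hedged sketch of pulling a duration matrix from Mapbox's Matrix API with `requests`.
import requests

MAPBOX_TOKEN = "YOUR_MAPBOX_ACCESS_TOKEN"  # placeholder

def duration_matrix(coords):
    """coords: list of (lon, lat) tuples. Returns an NxN matrix of travel durations in seconds."""
    coord_str = ";".join(f"{lon},{lat}" for lon, lat in coords)
    url = f"https://api.mapbox.com/directions-matrix/v1/mapbox/driving/{coord_str}"
    resp = requests.get(url, params={"access_token": MAPBOX_TOKEN, "annotations": "duration"})
    resp.raise_for_status()
    return resp.json()["durations"]

# Example: two drivers' start points followed by six delivery stops (made-up coordinates).
coords = [(-122.42, 37.78), (-122.45, 37.91), (-122.48, 37.73),
          (-122.39, 37.76), (-122.41, 37.81), (-122.44, 37.75),
          (-122.46, 37.79), (-122.40, 37.74)]
# durations = duration_matrix(coords)
```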
Adding a second driver/salesman to the problem makes the problem exponentially harder to solve, as discussed in the answer to this Stack Overflow post.
Here is a link to a scientific paper discussing a possible approach to this problem.
As evidenced by the research community, a solution for the Multiple Travelling Salesman Problem is not straightforward to implement. If you do not want to engage in this non-trivial task of implementing an algorithm that would solve it for you, you could implement a function that will make an educated guess on how to split up the destination coordinates between the two drivers. This "educated guess" could be based on values obtained from the Matrix API. You could make a one-to-many request for each driver, then take the lesser of the two durations for each coordinate and assign the coordinate to the appropriate driver. Then, you can use Mapbox's Optimization API to solve the two separate travelling salesman problems individually.
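Here is a hedged sketch of that "educated guess" split: give each destination to the driver who can reach it faster (per the Matrix API durations), then run one Optimization API request per driver. The endpoint and parameter names are my reading of the public docs, not verified code, and the token is a placeholder:

```python
import requests

MAPBOX_TOKEN = "YOUR_MAPBOX_ACCESS_TOKEN"  # placeholder

def split_destinations(durations, n_drivers, destinations):
    """durations: matrix from the Matrix API whose rows/columns are ordered as
    [driver starts..., destinations...]. Assign each destination to its fastest driver."""
    assignments = {d: [] for d in range(n_drivers)}
    for j, dest in enumerate(destinations, start=n_drivers):
        best_driver = min(range(n_drivers), key=lambda d: durations[d][j])
        assignments[best_driver].append(dest)
    return assignments

def optimized_trip(coords):
    """One Optimization API call for a single driver's stops (driver start listed first)."""
    coord_str = ";".join(f"{lon},{lat}" for lon, lat in coords)
    url = f"https://api.mapbox.com/optimized-trips/v1/mapbox/driving/{coord_str}"
    resp = requests.get(url, params={"access_token": MAPBOX_TOKEN, "source": "first"})
    resp.raise_for_status()
    return resp.json()

# Usage (assuming `durations` and `coords` come from the Matrix API sketch above, with the
# two driver start points first in the coordinate list):
# plan   = split_destinations(durations, n_drivers=2, destinations=coords[2:])
# routes = {d: optimized_trip([coords[d]] + plan[d]) for d in plan if plan[d]}
```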
Even if you did implement an algorithm that would solve the Multiple Travelling Salesman Problem, the problem's complexity grows exponentially with the number of drivers and the number of waypoints. Therefore, you could end up with a solution that works, but would not necessarily compute in a reliable amount of time. These performance limitations are something to keep in mind when going about implementing a solution.

Analyze 3D objects with voxels

I plan to use OpenVDB to analyze 3D objects/meshes. The objective is:
To detect object surface regions with a certain criterion, like slope
Then manipulate those regions
The manipulation might be adding other 3D objects to those regions, for example
OpenVDB has some tools available:
Conversion Tools
Filters
Topological Operations
Level Set Tools
Morphological Operations
Geometric Transforms
Compositing Tools
...
It is a large and confusing set of tools to choose from. Does anybody with OpenVDB experience know:
Is OpenVDB the proper library to achieve my objective?
If so, which OpenVDB tool best suits my needs?
Answer provided by OpenVDB community:
An important question is what you mean by "3D objects/meshes." OpenVDB is very good at performing those sorts of operations on surfaces by representing them as signed distance fields. But the word "mesh" raises some alarm bells that you may want to maintain topology; in that case another library may be more effective.
It also sounds like you have a problem domain you are trying to explore. For that, I would not go straight to code but instead explore solutions using 3D applications first. My own biased first choice would be Houdini, whose Apprentice version you can get for free. It provides most of the VDB code as separate nodes. So, for example, you can use a File SOP to load a mesh from disk, a VDB From Polygons node to convert it to a Signed Distance Field, and then a VDB Analysis node to compute the gradient. The gradient, I think, matches what you are looking for as slope, but it is also possible you are looking for curvature...
To return to mesh land, you can use a VDB Convert node. Finally, a ROP Geometry node can save it out.
Attached is a file showing a network to compute an approximate Y-slope as a volume, apply it back to a mesh, and save to disk.
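To make the slope idea concrete, here is a conceptual Python/numpy illustration of the same pipeline. This is not the OpenVDB or Houdini API (OpenVDB stores the grid sparsely and provides its own mesh-to-level-set and gradient tools); it simply builds a signed distance field on a dense voxel grid, takes its gradient as the surface normal, and flags surface voxels steeper than a chosen angle. The sphere, the resolution and the 45-degree threshold are arbitrary examples:

```python
import numpy as np

# Dense SDF of a sphere of radius 0.6 on a 64^3 grid spanning [-1, 1]^3.
n = 64
axis = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.6

# Gradient of the SDF ~ surface normal direction (conceptually what "VDB Analysis: Gradient" gives).
voxel = axis[1] - axis[0]
gx, gy, gz = np.gradient(sdf, voxel)
norm = np.sqrt(gx**2 + gy**2 + gz**2) + 1e-12
ny = gy / norm                                   # Y component of the unit normal

# Surface band: voxels whose SDF magnitude is within one voxel of the zero crossing.
surface = np.abs(sdf) < voxel

# "Slope" criterion: surface voxels whose normal is closer to horizontal than 45 degrees,
# i.e. the surface itself is steeper than 45 degrees.
steep = surface & (np.abs(ny) < np.cos(np.radians(45)))
print(f"{steep.sum()} of {surface.sum()} surface voxels are steeper than 45 degrees")
```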

Classifying hand gestures using HMMs in MATLAB

I'm currently working on a project where I need to classify hand gestures. Many papers propose HMMs as the way to do this, but the tutorials I find use either a weather example or a dice-and-coin example, and I can't see how to map these to my problem or what my different matrices should be. I currently have a feature vector (the detected hand features as an n*2 matrix, where n is the total number of features detected across all frames; for example, if the algorithm detects 10 features per frame and the video has 10 frames, then n = 100, and the 2 columns are the x and y coordinates) and a motion vector (the motion of the hand itself in the video, an m*2 matrix where m is the number of frames). I'd also welcome suggestions for any other data worth extracting from the video.
I know the papers you are talking about, and the examples about the weather are simplistic and cannot be mapped to most of the problems now tackled with HMMs. In your case, you have features corresponding to hand gestures that you know. HMMs can work here because your data is dynamic, i.e. ordered in time.
My advice is that you should first have a look at the widely used HMM toolbox by Kevin Murphy. It provides all the tools you need to start working with HMMs.
The main idea is to model each gesture type with one dedicated HMM. For a given gesture type, the corresponding HMM will be trained with the available features that you have.
Once trained, you get a state transition probability matrix, an emission probability matrix and a prior for selecting the initial state.
When you have an unknown gesture, you then compute the likelihood that this gesture (its features, actually) could have been generated by each of the trained HMMs. The query sequence is usually assigned to the category of the model giving the highest score.
That is the big picture. In your case, you will have to find a way to represent your features as a time series, the "time" being the different frames. With a complex application such as hand gestures, it might be difficult to see what each state of the model represents. Some kinds of HMM make this analogy easier through their topology (left-to-right models, for instance).
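As a rough illustration of the "train one HMM per gesture, pick the highest likelihood" scheme described above, here is a Python sketch using hmmlearn in place of the MATLAB toolbox mentioned in the answer (a substitution on my part). The gesture data is synthetic; in practice you would feed in your per-frame (x, y) features:

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def fake_gesture(offset, n_frames=30):
    """Synthetic (x, y) trajectory standing in for one gesture recording."""
    t = np.linspace(0, 1, n_frames)[:, None]
    return np.hstack([t + offset, np.sin(2 * np.pi * t) * (1 + offset)]) + rng.normal(0, 0.05, (n_frames, 2))

def train_gesture_hmm(sequences, n_states=4):
    """Fit one Gaussian HMM on all training sequences of a single gesture class."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100, random_state=0)
    model.fit(X, lengths)
    return model

# One model per gesture class (two made-up classes here).
models = {
    "swipe": train_gesture_hmm([fake_gesture(0.0) for _ in range(20)]),
    "wave":  train_gesture_hmm([fake_gesture(1.0) for _ in range(20)]),
}

def classify(sequence):
    """Assign the query to the class whose HMM gives the highest log-likelihood."""
    scores = {label: m.score(sequence) for label, m in models.items()}
    return max(scores, key=scores.get), scores

print(classify(fake_gesture(0.0)))
```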

Mapping Vision Outputs To Neural Network Inputs

I'm fairly new to MATLAB, but have acquainted myself with Simulink and Computer Vision over the past few days. My problem statement involves taking a traffic/highway video input and detecting if an accident has occurred.
I plan to do this by extracting the values of centroid to plot trajectory, velocity difference (between frames) and distance between two vehicles. I can successfully track the centroids, and aim to derive the rest of the features.
What I don't know is how to map these to an ANN. I mean, every image has more than one vehicle blob, which means there are multiple centroids in a single frame/image. So how does the NN act on multiple inputs (the extracted features per vehicle) simultaneously? I am obviously missing the link; please help me figure it out.
Also, am I looking at time series data?
I am not exactly sure about your question; could you rephrase the problem? It can be treated both as time series data and not. You might be able to transform the time series version of the problem so that it can be solved with an ANN, but that is sort of a Maslow's hammer :).
As you said, you could give it features from two or three frames and then use the classifier to decide accident or not, but such a classifier might be difficult to train. The problem is really hard, so you might need tons of training samples to get it right, especially really good negative samples (for example, cars travelling close to each other without colliding).
There are multiple ways you can try to solve this accident-detection problem. One is to build a classifier (ANN/SVM, etc.) without time series data: your input would be accident images and non-accident images, or some other set of positive and negative samples for training, with later images for testing. In this case you are not looking at time series data, but you might need lots of features to make it work (this is, in some sense, the single-frame version of the problem; see the sketch below).
The second method is to use time series data: detect the features, track them (say with Lucas-Kanade or Horn-Schunck optical flow), and then use the velocity and centroid information to detect the accident. You might even be able to formulate it with HMMs.
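To make the single-frame idea concrete, here is a hedged Python sketch (using scikit-learn's MLP in place of a MATLAB ANN, a substitution on my part) of one way to collapse a variable number of vehicle blobs into a fixed-length per-frame feature vector and feed it to a classifier. The features, thresholds and synthetic training data are illustrative only:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def frame_features(centroids, velocities):
    """Fixed-length summary of one frame: min/mean pairwise distance, max speed, speed spread."""
    c = np.asarray(centroids, dtype=float)
    v = np.asarray(velocities, dtype=float)
    if len(c) < 2:
        return np.array([1e3, 1e3, 0.0, 0.0])       # sentinel values for frames with < 2 vehicles
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
    d = d[np.triu_indices(len(c), k=1)]              # unique vehicle pairs
    speeds = np.linalg.norm(v, axis=1)
    return np.array([d.min(), d.mean(), speeds.max(), speeds.max() - speeds.min()])

# Synthetic training frames: "accident-like" frames have nearly overlapping blobs and low speeds.
def synth_frame(accident):
    k = rng.integers(2, 6)
    c = rng.uniform(0, 100, (k, 2))
    if accident:
        c[1] = c[0] + rng.normal(0, 1.5, 2)           # two blobs almost touching
    v = rng.uniform(0, 5 if accident else 15, (k, 2))
    return frame_features(c, v)

X = np.array([synth_frame(a) for a in ([True] * 200 + [False] * 200)])
y = np.array([1] * 200 + [0] * 200)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)
# Classify a frame with two nearly touching, slow-moving blobs:
print(clf.predict([frame_features([[10, 10], [10.6, 10.2]], [[0.1, 0.0], [2.0, 1.0]])]))
```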

How to adjust the Head-related transfer function (HRTF) in OpenAL or Core Audio?

OpenAL makes use of HRTF algorithms to fake surround sound with stereo headphones. However, there is an important dependency between HRTF and the shape of the users head and ears.
Simplified, this means: If your head / ears differ too much from the standard HRTF function they have implemented, the surround sound effect fades towards boring stereo.
I haven't yet found a way to adjust the various factors contributing to the HRTF algorithm, such as head diameter, pinna / external ear size, ear-to-ear distance, nose length and other important properties influencing the HRTF.
Is there any known way of setting these parameters for best surround sound experience?
I don't believe you can alter the HRTF in OpenAL. You certainly couldn't do it by putting in parametric values such as nose or pinna size. The only way to find out your HRTF is to put some very tiny, very accurate microphones in your ears, go into an anechoic chamber and take frequency response measurements at every angle around your head. Obviously this is time consuming, expensive and impractical. It would be fantastic to be able to work out your HRTF from measuring your head, but unfortunately acoustics isn't that deterministic and your ear is very sensitive to inaccuracies as you pointed out. I think the OpenAL HRTF is based on some KEMAR dummy head measurements (these perhaps?).
So, I think the short answer is that you can't alter the HRTF for OpenAL. Because HRTF is such a complex function that your ear is so sensitive to, there's no accurate way to approximate it with parametric values.
You might be able to make a "configuration game" out of optimizing the HRTF. I've been looking for an answer to the question of whether any of the virtual surround headsets or sound cards let you adjust them to fit your personal HRTF.
Idea: you vary the different HRTF variables and play a sound. The user has to close his eyes and move the mouse in the direction he thinks the sound came from, and you measure how accurate he was.
You could use something like a thin plate spline or statistical curve fitting to model the accuracy results and sample different regions of the multidimensional HRTF space to optimize the solution. This would be a kind of "brute force" method that finds a solution that is not necessarily accurate, but only as good as the user's patience to optimize his personal HRTF allows.
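Purely as an illustration of that idea (not an OpenAL feature), here is a Python sketch of such a calibration loop: for each candidate HRTF data set, play cues from known directions, let the user point, and keep the data set with the smallest average angular error. The playback and input functions are stubs, because standard OpenAL exposes no parametric HRTF controls, and every name here is hypothetical:

```python
import math
import random

CANDIDATE_HRTFS = ["kemar_default", "small_pinna", "large_pinna"]   # hypothetical table names

def play_with_hrtf(hrtf_name, azimuth_deg):
    """Stub: would load `hrtf_name`, place an audio source at `azimuth_deg`, and play a cue."""
    pass

def get_user_guess_deg():
    """Stub: would read the direction the user pointed with the mouse. Randomized here."""
    return random.uniform(0.0, 360.0)

def angular_error(a, b):
    """Smallest absolute difference between two angles in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def score_hrtf(hrtf_name, trials=10):
    """Average pointing error over a number of trials; lower is better."""
    errors = []
    for _ in range(trials):
        target = random.uniform(0.0, 360.0)
        play_with_hrtf(hrtf_name, target)
        errors.append(angular_error(get_user_guess_deg(), target))
    return sum(errors) / len(errors)

best = min(CANDIDATE_HRTFS, key=score_hrtf)
print("best-scoring HRTF data set:", best)
```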
According to a readme in the OpenAL Soft source code, it uses a 32-sample convolution filter, and you can create and use custom HRTF data sets.
It looks like it is now possible. I stumbled upon this comment, which describes how to use hrtf_tables to approximate your own ears. Google is also showing me results for something called hrtf-paths, but I'm not sure what that is.