Mapbox Directions API: inversion between walking and cycling

Hello, on a service that uses the Mapbox Directions API (in France) I noticed a problem: the results returned for walking and cycling directions are swapped.
I first thought the swap came from our own integration of the Mapbox API. To better understand the problem, I tested directly on the Mapbox playground (https://docs.mapbox.com/playground/directions/), and it seems to be a problem at the level of the API itself: at least on French territory, the walking and cycling results are swapped... Thank you in advance for taking my request into consideration.
Best regards.

Could you expand your question with an actual request that does not produce the expected result?
One reason for the confusion may be that the biking profile can return walking instructions in cases where you are not allowed to bike due to restrictions.
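For what it's worth, a request like the sketch below, one call per profile for the same pair of coordinates, should show whether the durations and step instructions really come back swapped (the coordinates and token here are placeholders, not taken from the original report):

```python
# Minimal comparison of the walking and cycling profiles of the Mapbox
# Directions API for one origin/destination pair.
import requests

TOKEN = "YOUR_MAPBOX_ACCESS_TOKEN"          # placeholder
COORDS = "2.3522,48.8566;2.2945,48.8584"    # example: central Paris -> Eiffel Tower

for profile in ("walking", "cycling"):
    url = f"https://api.mapbox.com/directions/v5/mapbox/{profile}/{COORDS}"
    resp = requests.get(url, params={"access_token": TOKEN,
                                     "steps": "true", "overview": "false"})
    route = resp.json()["routes"][0]
    print(profile, round(route["duration"] / 60, 1), "min,",
          round(route["distance"] / 1000, 2), "km")
```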

Related

iOS image comparison

I am just doing some research into image processing and would appreciate it if someone could point me in the right direction. I want to compare image 'A', which is a picture of a person's face, with images stored in a database (B, C, D, E, etc.) which are also pictures of faces. I want to compare them to see if person 'A' is already in the database.
Several questions:
1. How is face recognition comparison usually done? (Do you extract features, e.g. eyes/mouth, and compare them to the other images?)
2. Are there prebuilt libraries that can do a comparison between images, or do I need to write my own algorithm?
3. Where can I start with this? (I would appreciate some references/reading material.)
Yes, you identify, extract and quantify various aspects of human faces, such as the distance between the pupils, the width of the mouth, the percentage of head height at which the tip of the nose sits, etc.
There is a company, Luxand, which makes software to do this, and I think they license it. Last time I looked (2009?) they didn't have an Objective-C library. They do have an app that claims to merge faces from photographs, so you can see what the offspring of any two people would look like, but it is very cheesy, with lots of hard-coded faces. (If you cross a dog with a teapot, you get the same baby face as from crossing two real faces.)
AFAIK, there is nothing in the iOS SDK that does this.
I would just Google "face recognition" and start reading. Good luck.
I would go with compiling OpenCV for the iPhone (http://computer-vision-talks.com/2011/02/building-opencv-for-iphone-in-one-click/), and then implementing one of the classical approaches to face recognition, such as eigenfaces (http://www.shervinemami.info/faceRecognition.html).
But don't expect miracles: the accuracy will be low, and the app will be slow.
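To give a feel for what eigenfaces involves, here is a rough NumPy sketch of the core idea (my own illustration, not the OpenCV-for-iPhone route above; it assumes every face is already detected, cropped to the same size, grayscale, and flattened into a row vector):

```python
# Toy eigenfaces sketch: PCA on flattened face images, then nearest-neighbour
# matching in the reduced "eigenspace".
import numpy as np

def train_eigenfaces(faces, num_components=20):
    """faces: (n_samples, n_pixels) array of flattened training faces."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # PCA via SVD: the rows of vt are the eigenfaces (principal components).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:num_components]
    projections = centered @ components.T   # every known face in eigenspace
    return mean, components, projections

def closest_match(face, mean, components, projections):
    """Return (index, distance) of the most similar known face."""
    query = (face - mean) @ components.T
    distances = np.linalg.norm(projections - query, axis=1)
    return int(np.argmin(distances)), float(distances.min())
```

Whether the closest match counts as "the same person" then comes down to a distance threshold you would have to tune on your own data.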
Also, when you say face recognition is difficult, doesn't the first link show how easy it is to detect faces in a picture?
The face detection from the first link only detects the face: it just tells you whether (and where) there is a face in the image, which you can then pass as input to the recognition algorithm.
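In other words, detection is only the cropping step before recognition. As a rough illustration, using OpenCV's bundled Haar cascade (a desktop Python sketch, not the Core Image / iOS route discussed in this thread):

```python
# Detection-only sketch: find face bounding boxes and return fixed-size crops
# that a recognizer (e.g. the eigenfaces sketch above) could consume.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_path, size=(100, 100)):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray[y:y + h, x:x + w], size) for (x, y, w, h) in boxes]
```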
Face recognition is very difficult: you need to extract some kind of "features" and perform some measurements... iPhone hardware isn't very appropriate for this job.
Yes, you can check here
http://maniacdev.com/2011/11/tutorial-easy-face-detection-with-core-image-in-ios-5/
for a tutorial and here
http://maniacdev.com/2011/12/open-source-library-for-adding-easy-face-to-your-ios-app-with-the-free-face-com-api/
for a free webservice.
3. I suggest Google Scholar (http://scholar.google.it/scholar?q=face+recognition&hl=it&btnG=Cerca&lr=), but I think that if you want to write your own algorithm you will need a lot of spare time :)

MATLAB object detection and tracking

I'm doing a research project on "Object detection using a digital camera".
Any suggestions on how to build and program the MATLAB code?
In particular, I have a picture of one object, say the screen of my laptop. Then I rotate the laptop and shoot a new picture. I would like to know the difference in the position of the screen. I think I can use edge detection after subtracting the two images, but... it is quite difficult for me to implement.
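A bare-bones version of the subtraction-plus-edge-detection idea described above might look like the sketch below (a Python/OpenCV translation of the same steps; in MATLAB the rough equivalents would be imabsdiff and edge). Whether this is actually a good way to track the screen is a separate question, as the answers below point out.

```python
# Rough sketch of the approach in the question: subtract the two photos,
# then run an edge detector on the difference image to outline what moved.
import cv2

def difference_edges(path_before, path_after, canny_lo=50, canny_hi=150):
    before = cv2.imread(path_before, cv2.IMREAD_GRAYSCALE)
    after = cv2.imread(path_after, cv2.IMREAD_GRAYSCALE)
    diff = cv2.absdiff(before, after)            # pixels that changed between shots
    edges = cv2.Canny(diff, canny_lo, canny_hi)  # edges of the changed regions
    return diff, edges
```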
Any suggestions on how to build and program the MATLAB code?
That largely depends on the goal you want to achieve. Can you be more specific? Are you streaming the frames or are you tracking offline?
In particular, I have a picture of one object, say the screen of my laptop. Then I rotate the laptop and shoot a new picture. I would like to know the difference in the position of the screen.
There are many ways to do this, and an extensive literature on the subject. I don't believe anyone would write up the equivalent of a survey paper on the subject as an answer on StackOverflow. Why don't you get started with an object tracking survey paper and then ask a more precise question?
Hi, I'm doing a research project on "Object detection using a digital camera". [...] I think I can use edge detection after subtracting the two images, but... it is quite difficult for me to implement.
What is your question? Are you asking us if this is a good way to track objects? Are you asking us if this is a new approach and has never been done? Are you asking someone to implement it for you?
Object tracking is a hard problem. I doubt that technique would succeed in any but the most basic scenarios. However, if you look at a survey paper, you might be pointed to a paper that already implemented this and presents results. Finally, I think you should brush up on your programming skills, because most (successful) object tracking techniques are not trivial to implement. If you don't want to program it yourself, there are online services where you can hire people. StackOverflow is not one of those places.
EDIT: I deduced that you're new both to programming (in MATLAB) and to object tracking, hence my answer. Don't misunderstand me, I'm trying to help. Let me rephrase my suggestions as a list:
Your question is far too general. You will get a lot more help from the SO community if you ask more precise questions, for two reasons: A) general questions result in general answers; and B) the way you asked your question could easily be interpreted as "someone, please do my work for me", even if that's not what you think you're asking.
Get acquainted with the problem domain. To ask more precise questions, you must already be close to your answer. For a good overview of object detection and tracking, find a good survey paper. If you're starting off on a research project, people in your lab should be able to point you to one.
Learn to program simple things first. All of the most proficient (effective and efficient) programmers I've ever met struggled with bubble sort when they were introduced to sorting. None of them would have been able to program an object detection algorithm as a first assignment. Get yourself a good image processing book that has exercises in MATLAB and go through the exercises one by one. If you can't do them all, choose those that are relevant to what you're trying to accomplish.

What content have you made or seen made using procedural techniques?

I was looking at some study I have to do in the future on procedural generation techniques, and I was wondering what types of content you have:
Developed
Helped Develop
Seen implemented
Tried to develop
and what methods/techniques/procedures you used to develop it.
If you feel generous, maybe you can even go into specifics, such as the data structures and algorithms you used to develop it.
If this needs to be made community wiki because it is not me asking for a problem to be solved, just let me know.
This is not a homework thread, because it is a research unit that I'm not taking yet ;)
Introversion Software, the makers of the games Defcon, Uplink and Darwinia (among others), started working on a game about a year ago which extensively uses PCG for city generation. Here is a video of their work, and you can read more about it in the game's development diary (start from the first part at the bottom of the page!).
This got me extremely interested, and seeing the potential for games I immediately started researching the technology. I have amassed a folder of 18 PDFs about the subject (research papers, SIGGRAPH presentations, etc.). Here, I uploaded it for you.
The main approach is to use L-systems; however, I never got around to understanding them well enough to make something out of this. I tried other, less successful approaches, like Voronoi diagrams, recursively splitting a rectangular area into smaller areas and shifting the boundaries a little to obtain a bit of randomness, and polygon subdivision.
The last method I got from Mike's Code Blog's posts (here and here). The screenshots on his blog make me drool; it is my biggest programmer's dream to ever get something that looks like that. I emailed him to ask how he did it, and here is the relevant part of his reply (I'm sure he wouldn't mind me posting it here):
L-Systems is definitely one way to go, but that isn't what I'm doing. The basis of my method is polygon subdivision. I start with a simple polygon that represents the entire area of the city. Then, I split it (roughly) in half, and then split those two polygons, etc. until I get down to city-block size. At that point, the edges of all my polygons represent roads. I then use the same subdivision method to break the blocks down into building-size lots.
The devil is in the details, of course, but that is the basic method.
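As a very rough illustration of that subdivision idea (my own toy sketch, not Mike's actual code: rectangles stand in for general polygons, and the split position is jittered so the grid doesn't look perfectly regular):

```python
# Toy recursive-subdivision sketch: cut a rectangle roughly in half along its
# longer side until the pieces are block-sized. Leaf rectangles are the city
# blocks; their shared edges form the road network.
import random

def subdivide(rect, min_size=40.0, jitter=0.2):
    x, y, w, h = rect
    if max(w, h) < min_size:                     # block-sized: stop splitting
        return [rect]
    t = random.uniform(0.5 - jitter, 0.5 + jitter)
    if w >= h:                                   # split the longer dimension
        a = (x, y, w * t, h)
        b = (x + w * t, y, w * (1 - t), h)
    else:
        a = (x, y, w, h * t)
        b = (x, y + h * t, w, h * (1 - t))
    return subdivide(a, min_size, jitter) + subdivide(b, min_size, jitter)

blocks = subdivide((0.0, 0.0, 1000.0, 800.0))    # one "city" rectangle
print(len(blocks), "city blocks")
```

The same splitting pass, run again on each block with a smaller minimum size, would give the building-size lots he describes.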
I for one still haven't managed to fully implement a solution I'm satisfied with, but it remains one of, if not my single biggest, programmer's dreams to ever achieve something like this.
Here are a few of the leaders in procedurally generated terrain (and to a lesser extent foliage). If you don't get a detailed answer here regarding methods and techniques, you might want to look in / ask in their forums. I have seen some discussions of techniques there.
TerraGen 2
World Builder
World Machine
Natural Graphics
Has no one mentioned the demoscene, which ONLY uses procedural stuff?
So, go search for Werkkzeug, Kkrieger and MilkyTracker to start. You can also visit the site pouet and see the wonder of well-done procedural videos (yes, procedural video clips! With music and graphics, all procedural!).
Allegorithmic's products are used in actual shipping titles. These guys focus on texture generation (both offline and at runtime).
They have some very pretty screenshots and demos.

Largest possible group of friends in common?

I'm trying to come up with the largest possible group of friends that would theoretically get along with each other, i.e., each person in the group should know at least 50% of the other people in the group.
I'm trying to come up with an algorithm for this that doesn't take ridiculously long; Facebook's API/cross-server talk is pretty slow as is.
I was thinking I could start with the friend that has the most mutual friends with me first, and then add people to the group one by one. But who would I choose next?
Just interested in the theory, no code is necessary.
Edit: When I said "theory", what I really meant was: what's the next logical step, in plain English :) I was hoping I could code this up in an afternoon, but I guess this is a bit more complicated than I anticipated, and I'm not sure I want to spend weeks delving into heavy graph theory. Nevertheless, maybe someone else will find this interesting.
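For what it's worth, one simple greedy heuristic (my own sketch, with no optimality guarantee; the exact problem is closely related to hard dense-subgraph problems) is to start from the whole friend list and repeatedly drop whoever knows the smallest fraction of the current group, until everyone left knows at least half of the others:

```python
# Greedy "peeling" heuristic: remove the member who knows the smallest share of
# the current group until everyone knows >= 50% of the rest.
# `friends` maps each person to the set of people they know (assumed symmetric).

def largest_compatible_group(friends):
    group = set(friends)
    while len(group) > 1:
        # Fraction of the rest of the group that each member knows.
        coverage = {p: len(friends[p] & (group - {p})) / (len(group) - 1)
                    for p in group}
        worst = min(coverage, key=coverage.get)
        if coverage[worst] >= 0.5:      # everyone meets the 50% rule: done
            break
        group.remove(worst)             # drop the least-connected member, retry
    return group

# Example with made-up, symmetric friendship data:
friends = {
    "me":  {"ann", "bob", "cat", "dan"},
    "ann": {"me", "bob"},
    "bob": {"me", "ann", "dan"},
    "cat": {"me"},
    "dan": {"me", "bob"},
}
print(largest_compatible_group(friends))   # drops "cat", keeps the other four
```

Fetching the friend lists over the API is likely the slow part; the peeling itself is cheap by comparison.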
MIT did some work on social graphing a while back. Although it used mobile phone data, the clustering algorithms and other systems should still apply, even though they are constructed using different inputs and criteria.
There is more MIT chatter about social graphing going on at the moment. Definitely the place to look for technical pointers on this kind of thing.
Whilst the problem of graph enumeration from a given node across its edges is NP-complete for most useful problems... applying graph traversal together with the wealth of available information might help you make this more efficient:
For any node (profile) N, you could data-scrape using Google or something to find the associated outgoing edges. This means you can harness a cache of the pages and Google's search technology to mitigate having to traverse the edges yourself.
Social profiles contain tons of metadata. Developing a statistical method for working out the likelihood of A knowing B without a direct path might be useful. After all, friends have (a) similar locations and (b) similar interests.
Other, seemingly irrelevant data can provide a means of locating people likely to know each other, and then you can double-check the edges. Things such as chatter on boards about a band or gig, or people mentioning "cat fight" when Kate smacked Mary in the mouth.
The data just needs looking at in the right way, in the same way MIT looked at geographical statistics to determine relationships through phones.
Good luck.
There is an algorithm called SCAN; with some precalculations, it can cluster a network at good speed.
You can find information about the algorithm here: SCAN: A Structural Clustering Algorithm for Networks
This is more "broad", but see if it helps you get ideas.

Water simulation with a grid

For a while I've been attempting to simulate flowing water with algorithms I've scavenged from "Real-Time Fluid Dynamics for Games". The trouble is that I don't seem to get water-like behavior out of those algorithms.
My guess is that I'm doing something wrong, or that those algorithms aren't entirely suitable for water-like fluids.
What am I doing wrong with these algorithms? Are these algorithms correct at all?
I have the associated project in a Bitbucket repository (requires gletools and the newest pyglet to run).
Voxel-based solutions are fine for simulating liquids, and are frequently used in film.
Ron Fedkiw's website gives some academic examples - all of the ones there are based on a grid. That code underpins many of the simulations used by Pixar and ILM.
Robert Bridson's Fluid Simulation course notes from SIGGRAPH and his website are also a good source. He has a book, "Fluid Simulation for Computer Graphics", which goes through developing a liquid simulator in detail.
The most specific answer I can give to your question is that Stam's real-time fluids for games are focused on smoke, i.e. where there isn't a boundary between the fluid (water) and an external air region. Basically, smoke and liquids use the same underlying mechanism, but for a liquid you also need to track the position of the liquid surface and apply appropriate boundary conditions on it.
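For reference, the shared core of that mechanism is a handful of relaxation passes over the grid; the sketch below shows only the diffusion step, in the spirit of Stam's paper (a NumPy rewrite using Jacobi-style iteration instead of Stam's Gauss-Seidel, with boundary handling omitted). The free-surface tracking mentioned above is precisely what this does not cover.

```python
# Diffusion step of a Stam-style grid solver: implicitly relax the density
# (or a velocity component) field toward the average of its neighbours.
import numpy as np

def diffuse(field, diff, dt, iterations=20):
    """field: (n+2, n+2) grid including a 1-cell border; diff: diffusion rate."""
    n = field.shape[0] - 2
    a = dt * diff * n * n
    result = field.copy()
    for _ in range(iterations):
        result[1:-1, 1:-1] = (field[1:-1, 1:-1] + a * (
            result[:-2, 1:-1] + result[2:, 1:-1] +
            result[1:-1, :-2] + result[1:-1, 2:])) / (1 + 4 * a)
    return result
```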
Cem Yuksel presented a fantastic talk about his Wave Particles at SIGGRAPH 2007. They give a very realistic effect for quite a low cost. He was even able to simulate interaction with rigid bodies like boxes and boats. Another interesting aspect is that the boat motion isn't scripted; it's simulated via the propeller's interaction with the fluid.
(source: cemyuksel.com)
At the conference he said he was planning to release the source code, but I haven't seen anything yet. His website contains the full paper and the videos he showed at the conference.
Edit: Just saw your comment about wanting to simulate flowing liquids rather than rippling pools. This wouldn't be suitable for that, but I'll leave it here in case someone else finds it useful.
What type of water are you trying to simulate? Pools of water that ripple, or flowing liquids?
I don't think I've ever seen flowing water, except in rendered movies. Rippling water is fairly easy to do; this site usually crops up in this type of question.
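The kind of rippling water that is easy to do is usually the classic two-buffer height-field trick; here is a rough NumPy sketch of one update step (my own illustration, not taken from the linked site):

```python
# Two-buffer height-field ripples: each new height is the average of the four
# neighbours, reflected through the previous buffer, then damped.
import numpy as np

def ripple_step(prev, curr, damping=0.99):
    nxt = np.zeros_like(curr)
    nxt[1:-1, 1:-1] = ((curr[:-2, 1:-1] + curr[2:, 1:-1] +
                        curr[1:-1, :-2] + curr[1:-1, 2:]) / 2.0
                       - prev[1:-1, 1:-1]) * damping
    return curr, nxt                      # becomes (prev, curr) next frame

# Usage: poke the surface once, then iterate each frame.
h_prev = np.zeros((64, 64))
h_curr = np.zeros((64, 64))
h_curr[32, 32] = 1.0                      # a "drop" hitting the pool
for _ in range(100):
    h_prev, h_curr = ripple_step(h_prev, h_curr)
```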
Yeah, this type of voxel-based solution only really works if your liquid is confined to very discrete and static boundaries.
For simulating flowing liquid, do some investigation into particles. Quite a lot of progress has been made recently in accelerating them on the GPU, and you can get some stunning results.
Take a look at http://nzone.com/object/nzone_cascades_home.html for a great example of what can be achieved.