Facebook Graph API - Place Distance Inaccurate

I am trying to use Facebook Graph API to retrieve a set of places at a certain coordinate.
Using the API, I executed the following query:
https://graph.facebook.com/search?access_token=APPTOKEN&type=place&center=3.187501,101.627542&distance=50000&limit=500
Theoretically, this query should return 500 places within 50 km of that coordinate. However, it only returned some 15 results within the immediate vicinity (say, a few hundred meters) of that coordinate. I tried changing the distance to 10000, 5000 or even 1000, while tweaking the limit parameter to figures such as 50, 100 and 1000, but the total results remained the same.
There are certainly other places nearby, i.e. if I change the query's coordinates to the following location which is less than a kilometer away, it returns a whole new result set:
https://graph.facebook.com/search?access_token=APPTOKEN&type=place&center=3.192022,101.625647&distance=50000&limit=500
Can someone please advise whether my query is problematic, or whether Facebook's Graph API is somehow limiting the distance or total results?
Thank you.

The fact that you only got 15 places in that kind of search was a bug on Facebook's side.
This bug was fixed today, so your query should work better now (I actually tried it and it does). Anyway, here you can see more details about this bug.
What I personally don't know is this: your query now returns around 450 results, and if you limit it to 5 km instead of 50 km you still get more or less the same number of places.
I tried queries with different distances and coordinates, and it seems to me that Facebook caps your total number of results so you never get more than around 450 places.
Even using pagination (with an offset) I can't manage to get more results (and I know there are more than 500 places within 50km around New York...)
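For reference, this is roughly how I page through the results. It's a minimal Python sketch, assuming the legacy limit/offset paging still applies to this endpoint; APPTOKEN is the placeholder from the question.

import requests

ACCESS_TOKEN = "APPTOKEN"  # placeholder from the question
SEARCH_URL = "https://graph.facebook.com/search"

def fetch_places(center, distance, page_size=100, max_results=1000):
    """Page through place results with limit/offset until no more data comes back."""
    places, offset = [], 0
    while offset < max_results:
        params = {
            "access_token": ACCESS_TOKEN,
            "type": "place",
            "center": center,      # "lat,lng"
            "distance": distance,  # meters
            "limit": page_size,
            "offset": offset,
        }
        resp = requests.get(SEARCH_URL, params=params)
        resp.raise_for_status()
        batch = resp.json().get("data", [])
        if not batch:
            break  # no more results, or the server-side cap (~450 places) was hit
        places.extend(batch)
        offset += page_size
    return places

print(len(fetch_places("3.187501,101.627542", 50000)))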
So if you find the answer to this I would be interested ;)

I am encountering a similar problem. I have built a tool to return geo-local results based upon keyword, city/state (or zip code) and a radius of up to 30 miles. What used to work flawlessly up until early December 2013 started to return only 15 results in late December. My "educated guess" is that FB is limiting results to prevent people like you and me from turning a profit from this information!!!

Related

Bing Maps REST API Snap to Route Fails Silently

I'm running into an odd problem using the snap to route feature of the Bing Maps REST API. It works great for most of the GPS coordinates I send it -- they were recorded by me on a recent motorcycle trip, so they're "contiguous" -- but fails silently for others.
As in, the returned status code is 200... but rather than returning SnapToRoadResponse objects it returns Route objects, which lack any of the snapped-to coordinates I need.
What's particularly interesting is the problem occurs in the middle of processing the entire route. In other words, it works fine for 6 or so invocations (each with around 100 points), fails for a number of invocations, and then works fine for the remaining invocations.
Is there a rate limit on how frequently you can access the snap-to service? I'm using just a basic Bing Maps account but could program around rate limitations easily enough (e.g., by waiting between invocations). But I couldn't find any reference to such a limit in the documentation I reviewed.
Or maybe Bing Maps just doesn't like the hills east of Santa Rosa and the 101 corridor south from there over the Golden Gate Bridge... :)
Turns out the problem was sending too many points through the request pipeline. The Bing Maps REST API requires (or at least strongly advises) not using GET requests involving more than 100 geographic points. I assumed the Bing Maps REST Toolkit took care of ensuring larger requests were done as POSTs. It does not, however, appear to do that.
Reducing the number of geographic points to no more than 100 per request solved the problem.
The portion of my route which was causing problems involved high speed freeway travel, which caused my code to interpolate additional points for each set of observed data points so as to ensure no two points were more than 2.5 kilometers apart (that's a Bing Maps hard limit). That drove the total number of points for each request along that stretch of the route to over 100 points, causing the problem I encountered.
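For anyone hitting the same thing, the fix boils down to batching the points before each request. Here is a minimal sketch of that batching (Python purely for illustration; the actual snap-to-road call is whatever you already use, Toolkit or raw REST, and the coordinates below are made up):

def chunk_points(points, max_per_request=100):
    """Split a route's GPS points into batches that stay within the 100-point limit per request."""
    return [points[i:i + max_per_request] for i in range(0, len(points), max_per_request)]

# Example: 450 interpolated points become five requests instead of one oversized GET.
route = [(38.44 + i * 0.001, -122.71) for i in range(450)]
batches = chunk_points(route)
print([len(b) for b in batches])  # [100, 100, 100, 100, 50]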

qlik sense capability api 10000 limit

We've reached the limit for hypercubes and need to extract more than 10,000 data points using the Capability API (I use "data points" for lack of a better word for the individual cells the API sends; 10,000 is the maximum when you multiply the width and height of your initial fetch). Has anyone been able to get the next page for hypercubes? Note that our requirement is for mashups, not extensions.
We did a workaround, but it required us to break up our dataset and it takes a little longer.
It makes you think: since Qlik is a data analytics tool, there should be a way to get all of your data. In an era where we process millions if not billions of records, 10,000 data points (not even records) is minuscule.
I should also mention that the app we are using this for does stock analysis; the users want to see trends and need to see information on individual points as tooltips. With the number of dimensions and measures we pass (7 per stock times about 20 stocks = 140), we are constrained to only about 70 days (10,000 / 140).
We are using Qlik Sense Server 11.24.4
Qlik Sense November 2017 Patch 2
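For what it's worth, our workaround is just the arithmetic above applied repeatedly: size each fetch so that width times height stays under 10,000 cells and step through the rows. A rough sketch of how the pages are sized (Python purely for illustration; the actual fetches still go through the Capability API):

MAX_CELLS = 10000   # per-fetch cap we are hitting
NUM_COLUMNS = 140   # 7 dimensions/measures per stock x ~20 stocks, as above

rows_per_page = MAX_CELLS // NUM_COLUMNS   # 71 rows fit under the cap
total_rows = 365                           # e.g. one year of daily points

# (top, height) pairs for each fetch, i.e. qTop/qHeight in hypercube terms
pages = [(top, min(rows_per_page, total_rows - top))
         for top in range(0, total_rows, rows_per_page)]
print(pages)  # [(0, 71), (71, 71), (142, 71), (213, 71), (284, 71), (355, 10)]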

Grafana Singlestat Max not matching Graph with same query

I have a Singlestat panel and a Graph panel that use an identical query (Singlestat & Graph query), and the Singlestat is set to Max (Singlestat setting).
Unfortunately, the graph clearly shows a maximum greater than the max singlestat (714 vs ~800): Singlestat vs Graph. Judging from the sparklines on the Singlestat, it seems like the Singlestat's calculations are less granular than the graph's. Can anyone explain why this would be if they're using the same base query? The other singlestat functions (like Min, Avg, etc.) seem to work fine. It's just max that I'm seeing this issue with.
Note: I reviewed the other Grafana Singlestat vs Graph posts, but this appears to be a different issue.
If you take a look at the first image you linked to, you'll notice there is a Min step input, with a default value of 5m. That's where your lower resolution comes from. You may set that explicitly to your scrape interval (or less, to make sure you don't lose any samples due to jitter in the scrape interval, although that may end up being costly), but if you increase your dashboard range enough you'll:
(a) likely have a singlestat max value that's higher than anything on the graph (because your graph is now lower resolution than the singlestat source data); and
(b) hit Prometheus' 11K samples limit if you zoom out to a range longer than 11K times the scrape interval.
Your best bet is to use PromQL to calculate the max value to display in your singlestat panel. You'll still have to deal with (a) above (low resolution graph when the range is long) but it's going to be the actual max (as much as the fact that you're actually sampling values at some fixed interval allows) and it's going to be more efficient.
The problem is that, given your query -- sum(jvm_thread_count) -- there is no way of putting that into a single PromQL query with max_over_time. You'd have to define a recorded rule (something like instance:jvm_thread_count:sum = sum(jvm_thread_count)) and then have your singlestat panel display the result of the max_over_time(instance:jvm_thread_count:sum[$__range_s]) instant query (check the Instant checkbox in your singlestat settings).
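If it helps, the recorded rule described above would go in a Prometheus rules file, roughly like this (the group name is arbitrary):

groups:
  - name: jvm
    rules:
      # Pre-aggregate the sum so the singlestat can run max_over_time() over it.
      - record: instance:jvm_thread_count:sum
        expr: sum(jvm_thread_count)

Once Prometheus is evaluating that rule, point the singlestat at the max_over_time(...) instant query mentioned above.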

Averaged Historical Data from Xively feed API

The Xively (Cosm) web interface issues the following request for averaged historical datapoints:
// For averaged historical datapoints
https://www.xively.com/feeds/<feedId>/datastreams/Humidity/graph.json&duration=21600seconds&interval=30&limit=1000&find_previous=true&function=average
I would like to fetch averaged historical data points (that is, if there are multiple samples within the interval I am asking for, return an averaged rollup as the representative point of the interval) using the Xively REST API.
However, the following seems to return the raw data points (they just pick one datapoint to represent the sample interval):
https://api.xively.com/v2/feeds/127181539.json?datastreams=TEMP&duration=1month&interval=21600&limit=200&function=average
So questions
1) How can I return averaged data points like the Xively web interface does? What parameter is needed for the feed API call?
2) Does anyone know about the parameter interval_type? I have read what is here (https://xively.com/dev/docs/api/quick_reference/historical_data/) about 50 times now but I still don't get it!
Update
function=sum as well as function=average works for the /datastreams/TEMP.json endpoint. Also, the results are discrete by default.
function=average does not work with the /feeds/feed_id.json endpoint. Maybe a bug?
If you've got "function=average" (which you have) as a query parameter, then the points you get back should be bucketed to the interval you specified (21600 seconds / 6 hours). Each point represents the average value for that period.
It might be worth making this query against the datastreams endpoint though, e.g.
https://api.xively.com/v2/feeds/127181539/datastreams/TEMP.json?duration=1month&interval=21600&limit=200&function=average
Hope this helps!
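A minimal Python sketch of that datastreams query, assuming API-key auth via the X-ApiKey header and the usual datapoints array in the response (the key value is a placeholder):

import requests

API_KEY = "YOUR_XIVELY_API_KEY"  # hypothetical placeholder
URL = "https://api.xively.com/v2/feeds/127181539/datastreams/TEMP.json"

params = {
    "duration": "1month",
    "interval": 21600,       # 6-hour buckets
    "limit": 200,
    "function": "average",   # each returned point should be the bucket's average
}

resp = requests.get(URL, params=params, headers={"X-ApiKey": API_KEY})
resp.raise_for_status()
for point in resp.json().get("datapoints", []):
    print(point["at"], point["value"])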

How do you figure out what the neighboring zipcodes are?

I have a situation that's similar to what goes on in a job search engine where you type in the zipcode where you're searching for a job and the app returns jobs in that zipcode as well as in zipcodes that are 5, 10, 15, 20 or 25 miles from that zipcode, depending on preferences set by the user.
How would you calculate the neighboring locations for a zipcode?
You need to get a list of zip codes with associated longitude / latitude coordinates. Google it - there are plenty of providers.
Then take a look at this question for an algorithm for calculating the distance.
I don't know if you can count on geonames.org to be around for the life of your app but you could use a web service like theirs to avoid reinventing the wheel.
http://www.geonames.org/export/web-services.html
I wouldn't calculate it; I would store it as a fixed table in the database (to change only when a country's allocation of ZIP codes changes). Make a relationship "is_neighbor_zip" which holds pairs (smaller, larger). To determine whether two codes are neighbors, check the table for the specific pair. If you want all neighboring zips, it might be better to make the table symmetric.
You need to use a GIS database and ask it for ZIP codes that are nearby your current location.
You cannot simply take the ZIP code number and apply some mathematical calculations to find other nearby ZIP codes. ZIP codes are not as geographically scattered as area codes in the US, but they are not a coordinate system.
The only exception is that the ZIP+4 codes are sub-sections of the larger ZIP code. You can assume that any ZIP+4 codes that have the same ZIP code are close to each other.
I used to work on rationalizing the ZIP code handling at a company; here are some practical notes I made:
Testing ZIP codes
Hopefully it has other useful info.
Whenever you create a zipcode, geocode it (e.g. with the Google Geocoder API, saving the latitude and longitude), then look up the haversine formula; this will calculate the distance (as the crow flies) from a reference point, which could also be geocoded if it is a town or zipcode.
To clarify some more:
When you are retrieving records based on their location, you need to compare each longitude and latitude DECIMAL with a reference point (your user's geocoded postcode or town name).
You can query:
SELECT * FROM photos p WHERE p.lat < 60 AND p.lat > 50 AND p.long > -10 AND p.long < 2
to find all UK photos, etc., because the UK lies roughly between 50 and 60 degrees latitude and between -10 and +2 degrees longitude.
If you want to find the distance then you will need to google the haversine formula and plug in your reference values.
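For example, a minimal haversine in Python (3958.8 miles is the Earth's mean radius; the two sample points are arbitrary):

from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle ('as the crow flies') distance between two lat/long points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))

# Filter candidate zipcodes by distance from the user's geocoded reference point.
print(round(haversine_miles(40.7506, -73.9972, 40.6892, -74.0445), 1))  # ~4.9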
Hope this clears things up a little bit more, leave a comment if you need details