I have installed Nominatim 4.1.0 (tokenizer = ICU) by following the instructions in the Nominatim documentation, added the Wikipedia data during the installation, and imported an up-to-date PBF file from geofabrik.de.
Everything works, but for certain kinds of requests (e.g. "Cagliari via Roma") the answers I get from the Nominatim website (https://nominatim.openstreetmap.org/) and from my local installation are very different. The correct results are the ones on the Nominatim website, of course.
The problem seems to be in the search-candidate algorithm or in the attribution/calculation of the address importance parameter.
The very strange thing is that I get these wrong results only for some requests.
Is there any particular parameter to set, or anything else I should verify?
I hope this is clear; even a small piece of advice or a comment would be very helpful.
Thanks
Michele
After a discussion with the maintainers (https://github.com/osm-search/Nominatim/discussions/2839), I found an acceptable solution by editing the following line in Geocode.php:
$this->iLimit = $iLimit + max($iLimit, 150);
The result is not exactly the same as that of the online version, but it works fine for me.
I am new to working with maps and search algorithms. Currently I am using the geopy package to get distances from Nominatim:
from geopy.geocoders import Nominatim
from geopy.distance import geodesic  # vincenty was removed in geopy 2.0; geodesic is its replacement

# recent geopy versions require a custom user_agent for Nominatim
nom = Nominatim(user_agent="distance-example")
chicago = nom.geocode("chicago")
dallas = nom.geocode("dallas")
chicago_gps = (chicago.latitude, chicago.longitude)
dallas_gps = (dallas.latitude, dallas.longitude)
distance = geodesic(chicago_gps, dallas_gps).km
print('Distance in kms: {}'.format(distance))
print(chicago.raw)
Output:
Distance in kms: 1294.7623005649557
{'lat': '41.8755546', 'osm_id': '122604', 'boundingbox': ['41.643919', '42.0230219', '-87.940101', '-87.5239841'], 'licence': 'Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright', 'lon': '-87.6244212', 'place_id': '178038280', 'class': 'place', 'icon': 'https://nominatim.openstreetmap.org/images/mapicons/poi_place_city.p.20.png', 'osm_type': 'relation', 'importance': 0.29566190262222, 'display_name': 'Chicago, Cook County, Illinois, United States of America', 'type': 'city'}
So for each place I can calculate the distance. Now I have a few questions:
Is it an airline distance? Also, does OSM provide the duration of the journey like Google does?
How can I get directions from "Chicago" to "Dallas" like Google? Is there a way to get routing directly from OSM, apart from using APIs such as MapQuest?
How can we implement traffic layers in our model? I need some good resources on that, and if there are any Python implementations, that would be great.
Is it an airline distance?
Yes, see the geopy documentation on distance calculation. geopy doesn't support real routing at the moment.
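To see why it is a straight-line ("as the crow flies") figure, here is a minimal pure-Python haversine sketch. The Chicago coordinates come from the raw result above; the Dallas coordinates are example values I looked up, so expect the result to be close to, not identical with, the geopy number:

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))  # 6371 km = mean Earth radius

chicago = (41.8755546, -87.6244212)  # from the Nominatim result above
dallas = (32.7762719, -96.7968559)   # example coordinates for Dallas
print(round(haversine_km(chicago, dallas), 1))
```

This should land within a few kilometres of the ~1294.76 km reported above; geopy uses a more precise ellipsoidal model, while haversine assumes a sphere.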
Also, does OSM provide the duration of the journey like Google does?
Yes, it does if you use a real router. Take a look at OSM-based online routers; several of them, such as GraphHopper and OSRM, provide turn-by-turn instructions.
How can I get directions from "Chicago" to "Dallas" like Google? Is there a way to get routing directly from OSM, apart from using APIs such as MapQuest?
See my previous answer. Use the API of one of the many online routers, or alternatively run your own routing instance. Many of these routers are open source and can be installed locally.
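As a concrete sketch of that approach, the snippet below builds a request for OSRM's public demo server (my choice of router for the example; any OSRM instance exposes the same route service). Note that OSRM expects coordinates in lon,lat order; the response JSON carries a duration (seconds) and distance (metres) per route:

```python
import json
from urllib.request import urlopen

def osrm_route_url(start, end, base="http://router.project-osrm.org"):
    """Build an OSRM /route request for two (lat, lon) pairs."""
    coords = "{:.6f},{:.6f};{:.6f},{:.6f}".format(start[1], start[0], end[1], end[0])
    return "{}/route/v1/driving/{}?overview=false".format(base, coords)

chicago = (41.8755546, -87.6244212)  # coordinates from the Nominatim lookup above
dallas = (32.7762719, -96.7968559)   # example coordinates for Dallas
url = osrm_route_url(chicago, dallas)
print(url)
# resp = json.load(urlopen(url))          # uncomment to actually query the demo server
# print(resp["routes"][0]["duration"])    # journey time in seconds
# print(resp["routes"][0]["distance"])    # driving distance in metres
```

The demo server is rate-limited and meant for testing only; for production use you should host your own OSRM instance.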
How can we implement traffic layers in our model? I need some good resources on that, and if there are any Python implementations, that would be great.
I can't help you with that. I would start by taking a look at http://opentraffic.io/ and https://github.com/graphhopper/open-traffic-collection.
When trying to use the recipe to visualize data from my device, which is registered to the IoT Foundation, I am not seeing any data on the graph.
When I try and publish the following:
myData={'name' : 'RPi', 'temp' : temp, 'relh' : relh}
... I get a blank visualization.
Send the data in JSON format (the visualization displays numerical data):
myData={'d': {'size': 54, 'temp': 34}}
Documentation on this format can be found here: https://docs.internetofthings.ibmcloud.com/messaging/payload.html
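For instance, wrapping hypothetical temp/relh readings in the 'd' object before publishing (the values here are made up; the 'd' wrapper is what the payload format expects):

```python
import json

temp, relh = 22.5, 41.0  # placeholder sensor readings
myData = {'d': {'temp': temp, 'relh': relh}}  # numeric values inside the 'd' wrapper
payload = json.dumps(myData)
print(payload)
```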
How do I get the top 400 (or more) lists for apps from iTunes? I need the top paid, free, and grossing lists for each category and overall.
I know the RSS feed exists at https://rss.itunes.apple.com/, but that only gives you the top 200. Yet sites like AppFigures and App Annie have lists of the top 400 or 500, and apps in the App Store will show you the top 400.
I tried the EPF feed; the popularity table only has twenty rows in it, and from other forums it looks like that feed has been unavailable for months. It also doesn't update as often as these other sites seem to.
I am looking for a solution directly from Apple, not via a third party. I am 99% certain that Apple provides this data hourly, but I do not know the endpoint.
Update 12 October 2015: According to Apple Developer Support, as of 9 October 2015 the issue has been resolved.
RSS feeds are indeed currently capped at 200 results (although they have been set to a maximum of 400 in the past).
Regarding the EPF relational feed: some services (e.g. Chomp) have relied on it in the past. I'm not sure about its current status, but if you've tried to use it, make sure you get the full weekly release (which is over 5 GB in size), not just an incremental release. Maybe this is the reason you get just a few rows?
Currently I don't know of other ways to get this information from Apple directly. You may try a free service from f6s or use an API provided by another paid service.
Update - Apple feedback received:
This is an interesting topic for me, so I contacted Apple yesterday and asked them whether there is any way to retrieve this data directly from them. This morning I received feedback from the iTunes Affiliate team at Apple on the availability of chart data. They confirmed the limitations of the RSS feed and also said the following about the EPF question:
If you are an affiliate, you could look into the EPF Relational to develop your own search results.
The EPF is a multiple-gigabyte download of the complete set of metadata from the iTunes Store, App Store, and Mac App Store. EPF is available for affiliates to fully incorporate aspects of the iTunes and App Store catalogs into a website or app. This tool is only for tech-savvy affiliates, and knowledge of relational database setup is required. Apple will not provide technical support for setting up or maintaining this tool.
EPF access is only available for approved Affiliate Program publishers. More information regarding the EPF can be found on the Enterprise Partner Feed documentation page. Review the documentation found there, and if you would then like access to the EPF, provide the following information: ...
Upon further investigation of the EPF technical documentation, I found out that one of the tables in the database contains the top 1,000 applications by genre.
So you should first import the data into your own database, starting from a weekly (multi-gigabyte) release, and then apply any daily (multi-megabyte) incremental updates available since that weekly release. According to Apple, the difference between the two is:
Feed Modes
iTunes generates the EPF data in two modes: full mode and incremental mode. The full export is generated weekly and contains a complete snapshot of iTunes metadata as of the day of generation. The incremental export is generated daily and contains records that have been added or modified since the last full export. The incremental exports are located relative to the full export on which they are based.
Provided you've imported the data into a relational database, you should be able to get the needed data with a simple SELECT statement similar to this one:
SELECT application.title, application_popularity_per_genre.application_rank
FROM application_popularity_per_genre
JOIN application
ON application.application_id = application_popularity_per_genre.application_id
WHERE application_popularity_per_genre.genre_id = XX
ORDER BY application_popularity_per_genre.application_rank ASC;
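To sanity-check the shape of that query without a multi-gigabyte import, here is a toy sqlite3 version with invented rows (the table and column names mirror the EPF relational schema used in the query; genre 36 is an arbitrary example value):

```python
import sqlite3

# in-memory mock of the two EPF tables used by the query (data invented)
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE application (application_id INTEGER, title TEXT);
CREATE TABLE application_popularity_per_genre
    (application_id INTEGER, genre_id INTEGER, application_rank INTEGER);
INSERT INTO application VALUES (1, 'App A'), (2, 'App B');
INSERT INTO application_popularity_per_genre VALUES (1, 36, 2), (2, 36, 1);
""")
rows = con.execute("""
    SELECT application.title, p.application_rank
    FROM application_popularity_per_genre AS p
    JOIN application ON application.application_id = p.application_id
    WHERE p.genre_id = 36
    ORDER BY p.application_rank ASC
""").fetchall()
print(rows)  # [('App B', 1), ('App A', 2)]
```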
Regarding hourly updates: looking at the relational structure, I see that an export_date column is available. Check whether you get multiple dates for each application when executing the SELECT above; if you do, you have data with finer granularity than a day. If not (which is more probable), and this is a dealbreaker for you, you should look at using the services of App Annie and the others I already proposed, which enrich this data with the data they get from developers via iTunes Connect. If you want the information for free, you can try to scrape it from App Annie (there are some free tools that do this, but this may not be very reliable in the long term, so you may be better off paying).
Update 2:
The iTunes Affiliate team confirmed that they are aware of the issue with this table.
Hope this answers your question.
Here's how you do it: you can hit a URL as follows and supply an iOS 5 user agent.
_IOS_DEEP_RANK_URL_BASE = 'https://itunes.apple.com/WebObjects/MZStore.woa/wa/topChartFragmentData?genreId=%s&popId=%s&pageNumbers=%d&pageSize=%d'
_IOS_DEEP_RANK_USERAGENT = 'iTunes-iPad/5.1.1 (64GB; dt:28)'
You also need to set the store front, based on which country you want:
"X-Apple-Store-Front: 143441-1,9"
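Put together in Python, this might look like the sketch below. The genreId/popId/page values are placeholders I made up, and since this endpoint is undocumented it can change or stop working at any time:

```python
from urllib.request import Request

URL_BASE = ('https://itunes.apple.com/WebObjects/MZStore.woa/wa/'
            'topChartFragmentData?genreId=%s&popId=%s&pageNumbers=%d&pageSize=%d')

req = Request(URL_BASE % (6014, 27, 0, 200))  # placeholder genreId/popId/page values
req.add_header('User-Agent', 'iTunes-iPad/5.1.1 (64GB; dt:28)')  # the iOS user agent above
req.add_header('X-Apple-Store-Front', '143441-1,9')  # store front selects the country
print(req.get_full_url())
```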
Would scraping data from App Annie be fine?
I used PhantomJS and CasperJS to scrape the top 500 free, paid, and grossing apps.
Install PhantomJS and CasperJS on your system.
In a terminal: casperjs appAnnieTop500Scraper.js
Sample Output
Free Apps
500 apps found:
// not shown: app names in json array format
// json array on file: freeTop500.json
Paid Apps
500 apps found:
// not shown: app names in json array format
// json array on file: paidTop500.json
Grossing Apps
500 apps found:
// not shown: app names in json array format
// json array on file: grossingTop500.json
appAnnieTop500Scraper.js
var free = [];
var paid = [];
var grossing = [];
var FREE_COLUMN_INDEX = 1;
var PAID_COLUMN_INDEX = 2;
var GROSSING_COLUMN_INDEX = 3;
var fs = require('fs');
var casper = require('casper').create();
// log click events and in-page JavaScript errors instead of echoing nothing
casper.on("click", function(selector) {
    this.echo("Clicked: " + selector);
});
casper.on("page.error", function(msg, trace) {
    this.echo("Page error: " + msg);
});
function getAppListScraper(columnIndex) {
    var selector = document.querySelectorAll('tbody#storestats-top-table tr td:nth-child(' + columnIndex + ') div.item-info div.main-info span.title-info');
    return Array.prototype.map.call(selector, function(e) {
        return e.getAttribute('title');
    });
}
function printToConsole(casper, appList) {
    casper.echo(appList.length + ' apps found:');
    casper.echo(JSON.stringify(appList));
}
function writeToFile(fileName, content) {
    fs.write(fileName, content, 'w');
}
casper.start('https://www.appannie.com/apps/ios/top/?device=iphone', function() {
    // click the "load all" button to load the 500-app list
    this.click('div#load-more-box span.btn-load p a.load-all');
    // wait 5000 ms for the apps list to load, then scrape it
    this.wait(5000, function() {
        free = this.evaluate(getAppListScraper, FREE_COLUMN_INDEX);
        paid = this.evaluate(getAppListScraper, PAID_COLUMN_INDEX);
        grossing = this.evaluate(getAppListScraper, GROSSING_COLUMN_INDEX);
    });
});
casper.run(function() {
    this.echo('Free Apps');
    printToConsole(this, free);
    writeToFile("freeTop500.json", JSON.stringify(free));
    this.echo('Paid Apps');
    printToConsole(this, paid);
    writeToFile("paidTop500.json", JSON.stringify(paid));
    this.echo('Grossing Apps');
    printToConsole(this, grossing);
    writeToFile("grossingTop500.json", JSON.stringify(grossing));
    this.exit();
});
I know this is an old question, but I recently faced the same problem.
After joining the dots from many sites, my solution goes like this:
You will need this list for the genres:
https://affiliate.itunes.apple.com/resources/documentation/genre-mapping/
And this list for the country codes:
https://affiliate.itunes.apple.com/resources/documentation/linking-to-the-itunes-music-store/#Legacy
This link gives you a basic RSS overview and generator, but misses a lot:
https://rss.itunes.apple.com/en-us
The following are examples I managed to piece together:
Top 100 Christian & Gospel
https://itunes.apple.com/au/rss/topsongs/genre=22/explicit=true/limit=100/xml
Or, the same one with JSON results
https://itunes.apple.com/au/rss/topsongs/genre=22/explicit=true/limit=100/json
Or, without the explicit songs:
https://itunes.apple.com/au/rss/topsongs/genre=22/limit=100/json
Top 100 CCM
https://itunes.apple.com/au/rss/topalbums/genre=1094/explicit=true/limit=100/xml
Just change the genre ID and the country code:
https://itunes.apple.com/{country code}/rss/topalbums/genre={genre code}/explicit=true/limit=100/xml
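A small helper to build these feed URLs (the function name and parameters are my own invention; the URL pattern is the one shown above):

```python
def itunes_chart_url(country, feed_type, genre, limit=100, fmt='xml', explicit=True):
    """Build an iTunes RSS chart URL from the pattern above."""
    url = 'https://itunes.apple.com/%s/rss/%s/genre=%s' % (country, feed_type, genre)
    if explicit:
        url += '/explicit=true'
    return url + '/limit=%d/%s' % (limit, fmt)

print(itunes_chart_url('au', 'topsongs', 22))
# → https://itunes.apple.com/au/rss/topsongs/genre=22/explicit=true/limit=100/xml
```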
I want to export a Munich map from OSM for the SUMO simulator. I've managed to download such a map from bbbike.de (472 MB), but when I convert it to .net.xml with netconvert I get a lot of warnings, and the simulator cannot import the map ("Loading error"). Do you have any idea how I can convert the map correctly (or do you think it fails because of the large file size)? Or where could I get such a map in a proper SUMO format (XML)? I actually only need the highways; I've tried to select just the highways with osmosis, but I end up with the same problem.
OK, to summarize our experience and to close this question :)
SUMO has problems parsing big, full OSM files, so you need to cut the area and filter for highways with osmosis:
Get the extract for Bavaria or the surrounding region: http://download.geofabrik.de/europe/germany/bayern/oberbayern.html
osmosis --read-pbf ./oberbayern-latest.osm.pbf --bounding-box top=48.3298 left=11.2699 bottom=48.0460 right=11.8948 --write-xml ./munich.xml
osmosis --read-xml ./munich.xml --tf accept-ways highway=* --used-node --write-xml ./munich_streets.xml
This can be loaded into SUMO, e.g. with the eWorld GUI.
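The filtered extract can then be converted into a SUMO network with netconvert (the output file name is my own choice):

```shell
# convert the highway-only extract into a SUMO network file
netconvert --osm-files ./munich_streets.xml -o ./munich.net.xml
```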