Can I get 'way' information via the Overpass API offline? - openstreetmap

I know the following code works well to get 'way' information via the Overpass API (or the overpy library).
from OSMPythonTools.overpass import Overpass
overpass = Overpass()
query = "way[highway](around:3,35.356309, 139.598544);(._;>;);out;"
result = overpass.query(query)
But I want to know whether I can get such information in an offline environment.
I can prepare the planet.osm data from https://planet.openstreetmap.org/ locally.
How can I get 'way' information from the planet.osm data offline?
Thank you.
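One possible offline approach (only a rough sketch, assuming the pyosmium package and a locally downloaded extract; the file name below is a placeholder) is to scan the planet file for highway ways directly instead of querying Overpass:
# Rough sketch: scan a local OSM extract for highway ways with pyosmium.
# Assumes 'planet-extract.osm.pbf' exists locally (pip install osmium).
import osmium

class HighwayHandler(osmium.SimpleHandler):
    def __init__(self):
        super().__init__()
        self.ways = []

    def way(self, w):
        # Collect ways tagged as highways, like the Overpass query above.
        if w.tags.get('highway'):
            self.ways.append((w.id, {t.k: t.v for t in w.tags}))

handler = HighwayHandler()
# locations=True resolves node coordinates so way geometry can be inspected.
handler.apply_file('planet-extract.osm.pbf', locations=True)
print(len(handler.ways), 'highway ways found')
Replicating the around: filter from the Overpass query would additionally require comparing node locations against the target coordinate, and for full planet data a local Overpass instance or an osm2pgsql/PostGIS import is probably more practical.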

Related

Difference Between Local Nominatim and OpenStreetMap Website results

I have installed Nominatim 4.1.0 (tokenizer = ICU) by following the instructions in the Nominatim documentation, added Wikipedia data during the installation, and imported an updated PBF file from geofabrik.de.
Everything works, but for some kinds of requests (e.g. "Cagliari via Roma") the answers I get from the Nominatim website (https://nominatim.openstreetmap.org/) and from my local installation are very different. The right results are on the Nominatim website, of course.
The problem seems to be with the search-candidate algorithm or with the attribution/calculation of the AddressImportance parameter.
The very strange thing is that I get these wrong results only for some requests.
Is there any particular parameter to set or anything else to verify?
I hope this is clear; even a small piece of advice or a comment would be very helpful to me.
Thanks
Michele
After a discussion with the maintainers (https://github.com/osm-search/Nominatim/discussions/2839), I found an acceptable solution by editing the following line in the Geocode.php file:
$this->iLimit = $iLimit + max($iLimit, 150);
The result is not exactly the same as that of the online version, but it works fine for me.

IBM i Access Client Solutions - Printer Output but using an API

I want to replicate the functionality of the IBM i Access Client Solutions "Printer Output" tool that is used to retrieve PDFs of spooled files from our IBM Db2 environment. Instead of a user interface, I want to replicate the functionality as an API.
I want to construct an API which takes inputs such as the filter parameters pictured below:
The output of the API would be PDF(s) of the printer output spooled files that match the parameters specified.
I figure that if I am able to access the i Access Printer Output tool, then I should be able to use my credentials to access the spool files using an API or something like that.
Where would I start in constructing something like this?
Also, are there any IBM guides that contain relevant information? I have looked but have been unsuccessful. The Programmer's Toolkit is also not available with my version of i Access.
Also, I don't have developer roles, so if this is possible, it would need to be something that I can do with little authority within the IBM i servers and the Access client.
First off, IBM ACS is Java-based. Thus, everything it does can be found in the IBM Toolbox for Java, aka JTOpen, aka JT400.
http://jt400.sourceforge.net/
Documentation: https://www.ibm.com/docs/en/i/7.4?topic=java-toolbox
You're going to want to look at the "reading a transformed spooled file" example.
The transformation actually happens on the IBM i side, by specifying the appropriate workstation customization object, QCTXPDF in this case rather than the example's original QWPTIFFG4:
// The following examples demonstrate how to set up a PrintParameterList to
// obtain different transformations when reading spooled file data. In the code
// segments that follow, assume a spooled file already exists on a server, and
// the createSpooledFile() method creates an instance of the SpooledFile class
// representing the spooled file.
// Create a spooled file
SpooledFile splF = createSpooledFile();
// Set up print parameter list
PrintParameterList printParms = new PrintParameterList();
printParms.setParameter(PrintObject.ATTR_WORKSTATION_CUST_OBJECT, "/QSYS.LIB/QCTXPDF.WSCST");
printParms.setParameter(PrintObject.ATTR_MFGTYPE, "*WSCST");
// Create a transformed input stream from the spooled file
PrintObjectTransformedInputStream is = splF.getTransformedInputStream(printParms);

Flutter | Retrieve ffprobe data

I'm using the flutter_ffmpeg package; specifically, I'm trying to retrieve information regarding the chapter marks from a .m4b file, which is an audiobook, by using this method:
_flutterFFmpeg.executeWithArguments(['-i', widget.book.path, '-print_format', 'json', '-show_chapters', '-loglevel', 'error']);
I was able to output this data as a JSON map in the console. The thing is, I need to use this data inside my application. Is there a way to get access to those chapters as a variable using another approach, or maybe to access this data directly from the console log printed by the method shown earlier?
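For what it's worth, the same ffprobe invocation can be captured programmatically instead of being read from the console. Here is a rough sketch outside of Flutter, in Python (assuming ffprobe is on the PATH and using a placeholder file path), just to illustrate the idea of capturing and decoding the JSON output:
# Sketch: run the same ffprobe arguments and capture the JSON directly
# instead of reading it from the console. 'book.m4b' is a placeholder path.
import json
import subprocess

result = subprocess.run(
    ['ffprobe', '-i', 'book.m4b', '-print_format', 'json',
     '-show_chapters', '-loglevel', 'error'],
    capture_output=True, text=True, check=True)

chapters = json.loads(result.stdout)['chapters']
for chapter in chapters:
    # Each chapter entry carries start/end times and usually a title tag.
    print(chapter.get('start_time'), chapter.get('tags', {}).get('title'))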

How to get top 400 lists from iTunes

How do I get the top 400 (or more) lists for apps from iTunes? I need the top paid, free, and grossing lists for each category and overall.
I know the RSS feed exists at https://rss.itunes.apple.com/, but that only gives you the top 200. Yet sites like AppFigures and App Annie have lists of the top 400 or 500, and apps in the App Store will show you the top 400.
I tried the EPF feed, but the popularity table only has twenty rows in it, and from other forums it looks like that feed has been unavailable for months; it also doesn't update as often as these other sites seem to anyway.
I am looking for a solution directly from Apple, not via a third party. I am 99% certain that Apple provides this data hourly, but I do not know the endpoint.
Update 12 October 2015: According to Apple Developer Support, as of 9 October 2015 the issue has been resolved.
RSS feeds are indeed currently capped at 200 results (although they have been set to a maximum of 400 in the past).
Regarding the EPF Relational: some services (e.g. Chomp) have relied on it in the past. I'm not sure about its current status, but if you've tried to use it, make sure you get the full weekly release (which, size-wise, is over 5 GB), not just an incremental release. Maybe this is the reason you get just a few rows?
Currently I don't know of other ways to get this information from Apple directly. You may try a free service from f6s or use an API provided by another paid service.
Update - Apple feedback received:
This is an interesting topic for me, so I contacted Apple yesterday and asked them whether there is any way to retrieve this data directly from them. This morning I received feedback on the availability of chart data from the iTunes Affiliate team at Apple. They confirmed the limitations of the RSS feed and also said the following on the EPF question:
If you are an affiliate, you could look into the EPF Relational to develop your own search results.
The EPF is a multiple-gigabyte download of the complete set of metadata from the iTunes Store, App Store, and Mac App Store. EPF is available for affiliates to fully incorporate aspects of the iTunes and App Store catalogs into a website or app. This tool is only for tech-savvy affiliates, and knowledge of relational databases setup is required. Apple will not provide technical support for setting up or maintaining this tool.
EPF access is only available for approved Affiliate Program publishers. More information regarding the EPF can be found on the Enterprise Partner Feed documentation page. Review the documentation found there, and if you would then like access to the EPF, provide the following information: ...
Upon further investigation of the EPF technical documentation, I found out that one of the tables in the database contains the top 1,000 applications by genre.
So, you should first import the data into your own database, starting from a weekly (multi-gigabyte) release, and then apply any daily (multi-megabyte) updates available since the weekly release. According to Apple, the difference between the two is:
Feed Modes
iTunes generates the EPF data in two modes:
full mode
incremental mode
The full export is generated weekly and contains a complete snapshot of iTunes metadata as of the day of generation. The incremental export is generated daily and contains records that have been added or modified since the last full export. The incremental exports are located relative to the full export on which they are based.
Provided you've imported the data in a relational database, you should be able to get the needed data with a simple SELECT statement similar to this one:
SELECT application.title, application_popularity_per_genre.application_rank
FROM application_popularity_per_genre
JOIN application
  ON application.application_id = application_popularity_per_genre.application_id
WHERE application_popularity_per_genre.genre_id = XX
ORDER BY application_popularity_per_genre.application_rank ASC;
Regarding hourly updates: by looking at the relational structure, I see that an export_date column is available. You should check whether you get multiple dates for each application when executing the SELECT above; if you do, you have data with finer granularity than a day. If not (which is more probable), and this is a dealbreaker for you, you should look at using the services of App Annie and the others that I already proposed, which enrich this data with the data they get from developers via iTunes Connect. If you want the information for free, you can try to scrape it from App Annie (there are some free tools that do this, but you should know that this may not be very reliable in the long term, so you may be better off paying).
Update 2:
iTunes Affiliate Team confirmed that they are aware of the issue with this table.
Hope this answers your question.
Here's how you do it: you can hit a URL as follows and supply an iOS 5 user agent.
_IOS_DEEP_RANK_URL_BASE = 'https://itunes.apple.com/WebObjects/MZStore.woa/wa/topChartFragmentData?genreId=%s&popId=%s&pageNumbers=%d&pageSize=%d'
_IOS_DEEP_RANK_USERAGENT = 'iTunes-iPad/5.1.1 (64GB; dt:28)'
You need to set the store front too, based on which country you want:
"X-Apple-Store-Front: 143441-1,9"
Would scraping data from App Annie be fine?
I used PhantomJS and CasperJS to scrape the top 500 free, paid, and grossing apps.
Install PhantomJS and CasperJS on your system.
In a terminal: casperjs appAnnieTop500Scraper.js
Sample Output
Free Apps
500 apps found:
// not shown: app names in json array format
// json array on file: freeTop500.json
Paid Apps
500 apps found:
// not shown: app names in json array format
// json array on file: paidTop500.json
Grossing Apps
500 apps found:
// not shown: app names in json array format
// json array on file: grossingTop500.json
appAnnieTop500Scraper.js
var free = [];
var paid = [];
var grossing = [];
var FREE_COLUMN_INDEX = 1;
var PAID_COLUMN_INDEX = 2;
var GROSSING_COLUMN_INDEX = 3;
var fs = require('fs');
var casper = require('casper').create();

casper.on("click", function() {
    this.echo();
});

casper.on("page.error", function() {
    this.echo();
});

function getAppListScraper(columnIndex) {
    var selector = document.querySelectorAll('tbody#storestats-top-table tr td:nth-child(' + columnIndex + ') div.item-info div.main-info span.title-info');
    return Array.prototype.map.call(selector, function(e) {
        return e.getAttribute('title');
    });
}

function printToConsole(casper, appList) {
    casper.echo(appList.length + ' apps found:');
    casper.echo(JSON.stringify(appList));
}

function writeToFile(fileName, content) {
    fs.write(fileName, content, 'w');
}

casper.start('https://www.appannie.com/apps/ios/top/?device=iphone', function() {
    // click load all button to load 500 apps list
    this.click('div#load-more-box span.btn-load p a.load-all');
    // wait 5000ms for the apps list to load then scrape it
    this.wait(5000, function() {
        free = this.evaluate(getAppListScraper, FREE_COLUMN_INDEX);
        paid = this.evaluate(getAppListScraper, PAID_COLUMN_INDEX);
        grossing = this.evaluate(getAppListScraper, GROSSING_COLUMN_INDEX);
    });
});

casper.run(function() {
    this.echo('Free Apps');
    printToConsole(this, free);
    writeToFile("freeTop500.json", JSON.stringify(free));
    this.echo('Paid Apps');
    printToConsole(this, paid);
    writeToFile("paidTop500.json", JSON.stringify(paid));
    this.echo('Grossing Apps');
    printToConsole(this, grossing);
    writeToFile("grossingTop500.json", JSON.stringify(grossing));
    this.exit();
});
I know this is an old question, but I recently was faced with the same problem.
After joining the dots from many sites, my solution goes like this:
You will need this list for the genres:
https://affiliate.itunes.apple.com/resources/documentation/genre-mapping/
And this list for the country codes:
https://affiliate.itunes.apple.com/resources/documentation/linking-to-the-itunes-music-store/#Legacy
This link gives you a basic RSS overview and generator, but misses so much:
https://rss.itunes.apple.com/en-us
The following are examples I managed to piece together:
Top 100 Christian & Gospel
https://itunes.apple.com/au/rss/topsongs/genre=22/explicit=true/limit=100/xml
Or, the same one with JSON results
https://itunes.apple.com/au/rss/topsongs/genre=22/explicit=true/limit=100/json
Or, without the explicit songs:
https://itunes.apple.com/au/rss/topsongs/genre=22/limit=100/json
Top 100 CCM
https://itunes.apple.com/au/rss/topalbums/genre=1094/explicit=true/limit=100/xml
Just change the genre ID and the country code:
https://itunes.apple.com/{country code}/rss/topalbums/genre={genre code}/explicit=true/limit=100/xml
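As a rough sketch (in Python, assuming the requests library, and assuming the feed keeps its traditional im:name/label JSON structure), pulling one of those JSON feeds and listing the entries could look like this:
# Sketch: fetch the Top 100 Christian & Gospel songs feed (genre 22, AU store)
# as JSON and print the entry titles. The JSON layout is assumed to follow the
# classic feed -> entry -> im:name/label structure.
import requests

url = 'https://itunes.apple.com/au/rss/topsongs/genre=22/limit=100/json'
feed = requests.get(url, timeout=30).json()['feed']

for rank, entry in enumerate(feed.get('entry', []), start=1):
    title = entry['im:name']['label']
    artist = entry['im:artist']['label']
    print(f'{rank:3d}. {title} - {artist}')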

Dicominfo not giving all metadata

I have a DICOM file from a GE MRI scanner, and there are a few pieces of information in the header I require (namely, the relative position of the scan). I tried using info = dicominfo(filename), but, for some reason, this piece of information does not show up. I know that this information is saved, however. It might be private data, but I'm not completely sure. If anyone has any information on how to resolve this issue, that would be greatly appreciated.
Try using the dicomread function instead; it should be more versatile than dicominfo, and it reads the information files too. If this doesn't work, then it means that the information you are trying to obtain is not made available by GE.
Or use gdcm to dump the private GE header:
$ gdcmdump --pdb input.dcm
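If gdcm isn't an option, a comparable dump of the private GE elements can also be done with the pydicom Python package; this is only a sketch of an alternative, assuming pydicom is installed:
# Sketch: list every private element in the file so the relative scan
# position can be located by hand. 'input.dcm' is the file in question.
import pydicom

ds = pydicom.dcmread('input.dcm')
for elem in ds:
    if elem.tag.is_private:
        print(elem.tag, elem.name, elem.value)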