Greetings fellow SO users,
I am extracting data with import.io from a site which contains city names. What I want to accomplish is to get the coordinates for each city from Nominatim and finally create/get a JSON response which contains each city name and its corresponding coordinates.
So I basically need to use the result from one API as the input for another (Nominatim).
Or in other words: feed a JSON list of city names to OSM's Nominatim and get back the coordinates for each city.
I wonder if this is even possible, or what other options I have. Finally, this would be used with Leaflet to put some markers on a map.
There are tutorials on how to query Nominatim, but they only cover one query at a time. Is it even possible to query a whole list of places?
What you want to achieve here is what we call chained APIs: you need two APIs, where the input of the second one is the output of the first one.
In this case you will need some custom processing between the two APIs. From the first API you get the city names, and from those you need to generate a list of URLs in the format http://nominatim.openstreetmap.org/search?q=CITY&format=xml, one for each CITY.
After that you can use the Bulk Extract feature in import.io and pass the whole list of URLs to be queried against the API.
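If you'd rather do the chaining yourself instead of using Bulk Extract, here is a minimal Python sketch of the idea. The cities list stands in for whatever your import.io extractor returns, and the output is just one possible shape for the city/coordinates JSON:

    import json
    import time

    import requests

    # Placeholder input: the city names extracted by import.io
    cities = ["Berlin", "Hamburg", "Munich"]

    results = []
    for city in cities:
        resp = requests.get(
            "https://nominatim.openstreetmap.org/search",
            params={"q": city, "format": "json", "limit": 1},
            # Nominatim's usage policy asks for an identifying User-Agent
            headers={"User-Agent": "my-geocoding-script"},
        )
        matches = resp.json()
        if matches:
            results.append({
                "city": city,
                "lat": matches[0]["lat"],
                "lon": matches[0]["lon"],
            })
        time.sleep(1)  # stay within Nominatim's one-request-per-second limit

    print(json.dumps(results))

The resulting JSON list can then be handed straight to Leaflet to place the markers.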
I'm currently attempting to use the CoinMarketCap API but finding it frustrating.
I want to use this URL to query their API:
https://pro-api.coinmarketcap.com/v1/cryptocurrency/listings/latest
However, rather than retrieving all coins, or simply limiting how many are returned, I want to find a certain few coins.
So for example, I want to only find Bitcoin, Ethereum and Cardano.
Their docs suggest you can sort by name, but that appears to just list the coins alphabetically, which isn't what I want.
So can anyone suggest how to query their API successfully and find just Bitcoin, Ethereum and Cardano using that GET URL above?
Here's the link to the documentation for that specific API request: https://coinmarketcap.com/api/documentation/v1/#operation/getV1CryptocurrencyListingsLatest
For this purpose, you can use the Quotes Latest endpoint:
https://pro-api.coinmarketcap.com/v1/cryptocurrency/quotes/latest
It allows you to pass a comma-separated list of CoinMarketCap IDs as a parameter, like this:
1,1027,328
or a list of slugs:
bitcoin,ethereum,monero
or a list of symbols:
BTC,ETH,XMR
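For example, a quick sketch with Python's requests (the API key is a placeholder, and you should double-check the response layout against the docs):

    import requests

    url = "https://pro-api.coinmarketcap.com/v1/cryptocurrency/quotes/latest"
    headers = {"X-CMC_PRO_API_KEY": "YOUR_API_KEY"}  # placeholder key
    params = {"symbol": "BTC,ETH,ADA", "convert": "USD"}  # Bitcoin, Ethereum, Cardano

    resp = requests.get(url, headers=headers, params=params)
    data = resp.json()["data"]

    for symbol, coin in data.items():
        print(symbol, coin["quote"]["USD"]["price"])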
If you're trying to scrape information about new listings at crypto exchanges, you may be interested in this API:
https://rapidapi.com/Diver44/api/new-cryptocurrencies-listings/
It includes endpoints for new listings and new pairs from the biggest exchanges, and a very useful endpoint with information about which exchanges you can buy a specific coin on and its price at each of them. It's a paid API, but it's worth it!
I'm pretty confused about this hip thing called NoSQL, especially Cloudant DB on Bluemix. As you know, this DB doesn't store the values chronologically; it's the programmer's task to sort the entries in case he wants the data to... well... be sorted.
What I'm trying to achieve is simply to get the last, let's say, 100 values a sensor has sent to Watson IoT (which saves everything in the connected Cloudant DB) in an ORDERED way. In the end it would be nice to show them in a D3.js kind of graph, but that's another task. I first need the values in an ordered array.
What I tried so far: I used curl via PHP to get the data from https://averylongID-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_all_docs?limit=20&include_docs=true
What I get is an unsorted array of 20 row entries with random timestamps: the last 20 entries in the DB, but not in terms of timestamps.
My question now is: do you know of a way to get the "last" 20 entries, sorted by timestamp? I did a POST request with a JSON string asking for the data to be sorted by timestamp, but that doesn't work, maybe because of the ISO timestamp string.
Do I really have to write a JavaScript or PHP script that gets ALL the database entries, finds the last 20 or 100 by parsing the timestamps, sorts the array again and then takes the (now really) last entries? I can't believe that.
Many thanks in advance!
I finally found out how to get the data in a nicely ordered way. The key is to use a _design document together with the _view API.
So a curl request to the following URL with these attributes and a query string did the job:
https://alphanumerical_something-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_design/iotp/_view/by-date?limit=120&q=name:%27timestamp%27
The curl result gets me the first (in terms of time) 120 entries. I just have to find out how to get the last entries, but that's already a pretty good result. I can now pass the data on to a nice JS chart and display it.
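Update: it looks like the view can simply be read from the other end with descending=true to get the last entries instead of the first ones. A sketch in Python, assuming the by-date view emits the timestamp as its key and that basic auth is used (account, credentials and database name are placeholders):

    import requests

    base = "https://ACCOUNT-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25"
    view = base + "/_design/iotp/_view/by-date"

    resp = requests.get(
        view,
        params={"limit": 120, "descending": "true"},  # newest keys first
        auth=("USER", "PASSWORD"),
    )
    rows = resp.json()["rows"]
    rows.reverse()  # flip back to oldest-first if the chart expects that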
One option may be to include the timestamp as part of the ID. The _all_docs query returns documents in order by id.
If that approach does not work for you, you could look at creating a secondary index based on the timestamp field. One type of index is Cloudant Query:
https://console.bluemix.net/docs/services/Cloudant/api/cloudant_query.html#query
Cloudant Query allows you to specify a sort argument:
https://console.bluemix.net/docs/services/Cloudant/api/cloudant_query.html#sort-syntax
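As a rough illustration, a sketch of the two requests involved (the timestamp field name, account, credentials and database name are assumptions based on the question):

    import requests

    db = "https://ACCOUNT-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25"
    auth = ("USER", "PASSWORD")

    # 1. Create a JSON index on the timestamp field (only needed once)
    requests.post(db + "/_index", auth=auth, json={
        "index": {"fields": ["timestamp"]},
        "type": "json",
    })

    # 2. Ask for the last 20 documents, newest first
    resp = requests.post(db + "/_find", auth=auth, json={
        "selector": {"timestamp": {"$gt": None}},  # matches any doc that has a timestamp
        "sort": [{"timestamp": "desc"}],
        "limit": 20,
    })
    docs = resp.json()["docs"]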
Another approach that may be useful for you is the _changes API:
https://console.bluemix.net/docs/services/Cloudant/api/database.html#get-changes
The changes API allows you to receive a continuous feed of changes in your database. You could feed these changes into a D3 chart for example.
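A sketch of following the feed (again, account, credentials and database name are placeholders):

    import json

    import requests

    db = "https://ACCOUNT-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25"

    resp = requests.get(
        db + "/_changes",
        params={"feed": "continuous", "include_docs": "true", "since": "now"},
        auth=("USER", "PASSWORD"),
        stream=True,
    )
    for line in resp.iter_lines():
        if line:  # the continuous feed sends empty keep-alive lines
            change = json.loads(line)
            doc = change.get("doc")
            # push the new sensor reading into the chart here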
I am reading this article, https://cloud.google.com/storage/docs/json_api/v1/objects/list, and I couldn't find what I'm looking for.
I have a bucket with thousands of objects (files). I want to list only the objects that have specific metadata.
Do you know how to achieve that?
There's no API support for server-side filtering of listing results by metadata values; you would need to list all the objects and then filter on the client side. Another option, if it's possible to rename your objects, would be to construct your object names such that the metadata values on which you want to filter are built into the beginning of the object names. You could then use a prefix filter on the listing request.
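For illustration, a sketch with the Python client library; the bucket name and the metadata key/value are made up:

    from google.cloud import storage

    client = storage.Client()

    # Client-side filtering: list everything, keep the objects whose custom
    # metadata matches
    matching = [
        blob
        for blob in client.list_blobs("my-bucket")
        if blob.metadata and blob.metadata.get("category") == "invoice"
    ]

    # If the value is encoded at the start of the object name instead, the
    # filtering can happen server-side with a prefix
    prefixed = list(client.list_blobs("my-bucket", prefix="invoice/"))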
I"m looking for way to get list of points of interests like airports, parks, etc. per zip code using Bing maps. Believe me, I search google but it looks like my google is broken since I can't find anything useful. I just need a way to get that info. If there is like a list that would be even better. Any help is appreciated fellows. Is there a simple URL where I can pass in my bing map key, a category and longitude/latitude and get a list of point of interests?
You could use the NAVTEQ point of interest data sources that are in the Bing spatial Data Services here: https://msdn.microsoft.com/en-us/library/hh478189.aspx
You can use the Query API to filter on the PostalCode property: https://msdn.microsoft.com/en-us/library/gg585126.aspx
Here is an example that gets points of interests that are in zip code 98004 and returns the results as XML:
http://spatial.virtualearth.net/REST/v1/data/f22876ec257b474b82fe2ffcb8393150/NavteqNA/NavteqPOIs?&$filter=PostalCode%20eq%20'98004'&$top=250&o=xml&key=YOUR_BING_MAPS_KEY
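If you want to consume it from code, here is a sketch of the same query in Python (the key is a placeholder, o=json is used instead of XML, and the exact response layout should be verified against the docs):

    import requests

    url = ("http://spatial.virtualearth.net/REST/v1/data/"
           "f22876ec257b474b82fe2ffcb8393150/NavteqNA/NavteqPOIs")
    params = {
        "$filter": "PostalCode eq '98004'",
        "$top": 250,
        "o": "json",
        "key": "YOUR_BING_MAPS_KEY",
    }

    resp = requests.get(url, params=params)
    results = resp.json().get("d", {}).get("results", [])
    for poi in results:
        print(poi.get("DisplayName"), poi.get("Latitude"), poi.get("Longitude"))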
That said, make sure that your use case does not go against this restriction in the Bing Maps terms of use:
(h) Use Content that consists of points of interest data to generate sales leads information in the form of ASCII or other text-formatted lists of category-specific business listings which (i) include complete mailing address for each business; and (ii) contain a substantial portion of such listings for a particular country, city, state or zip code region.
I want to be able to grab data from multiple tags/folders in a user's Google Reader.
I know how to do it for one (http://www.google.com/reader/atom/user/-/label/SOMELABEL), but how would you do it for two, three or ten?
It doesn't look like you can get multiple tags/folders in one request. If it's feasible, you should iterate over the different tags/folders and aggregate them in your application.
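As a rough sketch of that iterate-and-aggregate approach (the label names are placeholders and the required authentication headers are omitted):

    import requests

    labels = ["news", "tech", "python"]  # placeholder label names
    base = "http://www.google.com/reader/atom/user/-/label/{}"

    feeds = []
    for label in labels:
        resp = requests.get(base.format(label), params={"n": 50})  # n = items per feed
        feeds.append(resp.text)  # each response is an Atom feed; parse and merge as needed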
[edit]
Since it looks like you have a large list of tags/folders you need to query, an alternative is to get the full list of entries, then sort out the ones the user wants. It looks like each entry has a category element that will tell you what tag is associated with it. This might be feasible in your case.
(Source: http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI)
(Source: http://www.google.com/reader/atom/user/-/state/com.google/starred)
I think you cannot get aggregated data the way you hope to. If you think about it, even Google only lets you browse folders or tags one at a time, and does not aggregate a subset of them.
You can choose to have a list of all the items (for each one of their available statuses) or a list of a particular tag/folder.
You could do it in two requests. First, perform a GET request to http://www.google.com/reader/stream/items/ids. It supports several parameters, such as:
s (required; the stream ID to fetch; may be defined more than once),
n (required; the number of items to fetch),
r for ranking (optional),
and others (see the /ids section for more).
Then perform a POST request (a POST because there could be a lot of IDs, and a GET request could get cut off) to http://www.google.com/reader/api/0/stream/items/contents. The required parameter is i, which holds the feed item identifier (it can be defined more than once).
This should return data from several feeds (it did for me).
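A rough sketch of those two steps (authentication headers are omitted, the stream IDs are placeholders, and the exact response layout and the ID format expected by the contents endpoint should be verified):

    import requests

    ids_resp = requests.get(
        "http://www.google.com/reader/stream/items/ids",
        params=[
            ("s", "user/-/label/news"),  # s may be repeated, one per tag/folder
            ("s", "user/-/label/tech"),
            ("n", 100),                  # number of items to fetch
            ("output", "json"),
        ],
    )
    item_ids = [ref["id"] for ref in ids_resp.json()["itemRefs"]]

    contents_resp = requests.post(
        "http://www.google.com/reader/api/0/stream/items/contents",
        data=[("i", item_id) for item_id in item_ids],  # i is repeated, one per item
    )
    items = contents_resp.json()["items"]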