How to export the road network of a city, including traffic_signals and street_lamps? - openstreetmap

I would like to get some statistics about the roads, the traffic lights deployed on them, and the street lamps. Is there any way to get these statistics directly for the city of Shenzhen (China)?
Secondly: how can I export the road network of a specific city (i.e., Shenzhen) including traffic_signals and street_lamps?
I have tried this query using the Overpass API:
[out:csv(::id,::lat,::lon)][timeout:900];
// gather results
(
node["highway"="street_lamp"](22.6242,113.6371,23.0628,114.5462);
);
// print results
out body;
The query doesn't retrieve any results for Shenzhen's coordinates (22.6242,113.6371,23.0628,114.5462). However, when applied to London's coordinates (51.3941,-0.2774,51.56,0.0879), it works and returns results.
Moreover, when I run a simple query, such as querying POIs:
[out:json][timeout:10];
// gather results
(
node["leisure"](around: 200,22.5,113.9936701,22.6740047,113.9935278);
);
out body;
It also works, even though it targets Shenzhen (China). Is there any way to retrieve nodes tagged with 'street_lamp' and 'traffic_signals' in Chinese cities (e.g., Shenzhen)?

To query within a city's boundaries, use the id of the city boundary's relation, convert it with map_to_area, and then query with the (area) filter:
rel(3464353);
map_to_area;
node(area)["highway"="street_lamp"];
out;
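Building on that answer, a minimal sketch that covers both tags and reuses the CSV output from the original attempt might look like the following (assuming, as above, that relation 3464353 is the Shenzhen boundary; the extra "highway" column merely distinguishes lamps from signals):
[out:csv(::id,::lat,::lon,"highway")][timeout:900];
// turn the city boundary relation into an area (left in the default set)
rel(3464353);
map_to_area;
// query both tags inside that area, collecting into named sets
node(area)["highway"="street_lamp"]->.lamps;
node(area)["highway"="traffic_signals"]->.signals;
// combine and print as CSV
(.lamps; .signals;);
out body;
Note that a query can only return what has actually been mapped: if few street lamps have been surveyed in Shenzhen, an empty result is expected even though the query itself is correct.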

Related

How to get all of the way IDs that each node is a part of

So I am trying to build an Overpass / OSM query that will, in effect, find all of the nodes that are part of multiple road segments, or 'ways'. My challenge is that I am dealing with a somewhat large area (Norfolk, VA, about 100,000 nodes), so I'm trying to find a reasonably performant query.
The following query is useful in that it provides all of the nodes, which I need to iterate over, as any node could be part of another way:
[out:json][timeout:25];
{{geocodeArea:Norfolk, VA}}->.searchArea;
(
way["highway"](area.searchArea);
node(w);
);
// print results
out body;
>;
out skel qt;
I also found the following query, which returns every node that is part of multiple ways. Very useful, but it is a very non-performant query, O(n^2), and it scales very poorly to an entire city.
way({{bbox}})->.always;
foreach .always -> .currentway(
(.always; - .currentway;)->.allotherways;
node(w.currentway)->.e;
node(w.allotherways)->.f;
node.e.f;
(._ ; .result;) -> .result;
);
.result out meta;
I think the minimum useful information I need is to have all of the node IDs returned as they are associated with each way (kind of like a map/dict), but I'm really struggling to figure out whether it is possible to make such a call. Appreciate your input!
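For what it's worth, a minimal sketch of that idea (reusing the geocodeArea shortcut from the first query): requesting only the ways with out body; in JSON already returns the node IDs per way, so the node-to-ways map can be built client-side in a single pass over the response rather than inside Overpass.
[out:json][timeout:60];
// hypothetical search area, as in the first query above
{{geocodeArea:Norfolk, VA}}->.searchArea;
way["highway"](area.searchArea);
// each way in the JSON response carries its ordered node IDs in a "nodes"
// array; counting how often each ID appears across all ways then yields
// the nodes shared by multiple ways without a second Overpass pass
out body;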

Is there a way to get results for an overpass query paginated?

Let's say I want to get restaurants in Berlin and I have this query:
[out:json];
area["boundary"="administrative"]["name"="Berlin"] -> .a;
(
node(area.a)["amenity"="restaurant"];
); out center;
Let's say this result set is too big to extract in just one request to Overpass. I would like to be able to use something like SQL's OFFSET and LIMIT arguments to get the first 100 results (0-99), process them, then get the next 100 (100-199), and so on.
I can't find an option to do that in the API; is it possible at all? If not, how should I query my data to get it divided into smaller sets?
I know I can increase the memory limit or the timeout, but this still leaves me handling one massive request instead of n small ones, which is how I would like to do it.
OFFSET is not supported by the Overpass API, but you can limit the number of results returned by the query via an additional parameter in the out statement. The following example would return only 100 restaurants in Berlin:
[out:json];
area["boundary"="administrative"]["name"="Berlin"] -> .a;
(
node(area.a)["amenity"="restaurant"];
); out center 100;
One approach to limiting the overall data volume could be to count the number of objects in a bounding box and, if that number is too large, split the bounding box into 4 parts (as sketched below). Counting is supported via out count;. Once the number of objects is feasible, just use out; to get the results:
node({{bbox}})["amenity"="restaurant"];
out count;
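As a sketch of that splitting step, with a purely illustrative bounding box (52.50,13.30,52.56,13.46) standing in for {{bbox}}: if the count comes back too high, issue one request per quadrant, for example the south-west quarter:
[out:json][timeout:25];
// south-west quadrant of the illustrative 52.50,13.30,52.56,13.46 box
node(52.50,13.30,52.53,13.38)["amenity"="restaurant"];
out;
Repeat (and subdivide further if necessary) for the other three quadrants until each response stays within the limits.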

Filtering results from is_in() OverPass query

I'm new to the Overpass API.
I would like to get the country element where a certain point is contained.
As a first step I tried this:
is_in(48.856089,2.29789);
out;
It gives me all the areas that contain the given coordinate, including regions, provinces, and so on.
So now I would like to filter for the country only. In the results, I can see that the country element is identified by the admin_level attribute, which must be equal to 2.
So, in order to filter my first request, I tried this:
is_in(48.856089,2.29789)[admin_level="2"];
out;
But in Overpass Turbo, it gives me the following error:
Error: line 1: parse error: ';' expected - '[' found.
I read that areas are an extended data type (compared to nodes, ways, and relations). Is that the reason why I can't filter my results?
How can I filter the results of the is_in query by [admin_level="2"]?
You cannot combine is_in with any additional filter criteria. The proper way to do this is as follows, where ._ refers to the area result returned by is_in.
is_in(48.856089,2.29789);
area._[admin_level="2"];
out;
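If the tags of the matching area are not enough and the full boundary object is needed, the pivot filter can (as far as I know) fetch the boundary relation the area was derived from, for example:
is_in(48.856089,2.29789);
area._[admin_level="2"]->.country;
// fetch the boundary relation behind the matched area
rel(pivot.country);
out tags;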

Orient SQL - Filter result set using WHERE?

I've got a bit of a semantic question about Orient SQL queries.
Take for example this very simple graph:
v(#12:1 User) --> e(#13:1 FriendOf) --> v(#12:2 User)
In other words, a given User with an rid of #12:1 is friends with another user with an rid of #12:2.
To get the friends of user #12:1, one might express this in Orient SQL like so:
SELECT EXPAND(both("FriendOf")) FROM #12:1
This query would return a result list comprised of the User with rid #12:2.
Now let's say I want to filter that result list by an additional criterion, say a numeric value ("age"):
SELECT EXPAND(both("FriendOf")) FROM #12:1 WHERE age >= 10
The above query would filter the CURRENT vertex (#12:1), NOT the result set. That makes sense, but is there a way to apply the filter to the EXPAND(both("FriendOf")) result rather than the current vertex? I know I can do this with Gremlin like so:
SELECT EXPAND(gremlin('current.both("FriendOf").has("age",T.gte,10)')) FROM #12:1
But the above does not seem to make use of indexes (at least not when I ask it to explain). For very large data sets, this is problematic.
So is there a proper way to apply a WHERE clause to the resulting data set?
Thanks!
... is there a way to apply the filter to the EXPAND(both("FriendOf")) result rather than the current vertex?
The simple answer is to embed your basic "SELECT EXPAND ..." within another SELECT, i.e.
SELECT FROM (SELECT EXPAND(both("FriendOf")) FROM #12:1) WHERE age >= 10
By the way, on my Mac, the above took 0.005 s compared to over 2 s for the Gremlin version. There must be a moral there somewhere :-)

Openstreetmap: filter out data that have been edited after some timestamp

I want to get OSM data after some timestamp - in other words, the last records after a certain timestamp. I have downloaded the .osm file of the area. I went through the Osmosis documentation but could not find a way to filter it by time. The result should be the same as when using the timestamp argument. How could I do that?
I could use Overpass, but the area is large and Overpass timed out many times.
I could use the osmconvert tool (cf. the manual: m.m.i24.cc/osmconvert.c).
Some of the following options from the osmconvert help might be useful for the task:
--timestamp=<date_time>    add a timestamp to the data
--timestamp=NOW-<seconds>  add a timestamp in seconds before now
What I have tried is the following:
./osmfilter austria-latest.osm --keep="$key=$school" |
./osmconvert - --all-to-nodes --csv="#id #lat #lon #timestamp $key name" --csv-headline |
but this fails. How can I get the data out of the .osm.pbf file? Should I use the drop options, or should I specify a certain time range (from timestamp to timestamp)?
Since version 0.7.50, the Overpass API provides a way to query for data that changed since a given timestamp or within a given timeframe. It is even possible to restrict the change analysis to certain tags (or filter criteria). Please check the Overpass API wiki page for more details on the "diff" and "adiff" keywords.
Working with the Overpass API in this way is much more convenient than trying to process a full history planet, which takes at least 35 GB to download and requires more complex post-processing.
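For illustration, a minimal adiff query of this kind could look as follows (the timestamp, the {{bbox}} shortcut, and the school filter are just placeholders for your own values):
// everything tagged amenity=school in the bbox that changed since the
// given timestamp, including the old and new versions of each object
[out:xml][adiff:"2019-01-01T00:00:00Z"][timeout:180];
(
node["amenity"="school"]({{bbox}});
way["amenity"="school"]({{bbox}});
);
out meta;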
You would want to process the OSM history planet (or extracts of it): https://wiki.openstreetmap.org/wiki/Planet.osm/full