I have a project where I have to program a parser that converts SVG floor plans of buildings into OSM.
In OsmInEdit, you can manually edit indoor maps. I want this to be done automatically. Do you have an idea how I can access the OSM database in an automated manner? I can only find graphical interfaces for manual editing.
Thank you a lot in advance!
Read and write access to the OSM database for editing purposes is available through the Editing API. This covers all OSM data; there are no specialized APIs for topics such as indoor mapping. Have a look at the relevant documentation on the OSM wiki (e.g. Simple Indoor Tagging) to understand the subset of OSM data you're interested in.
Note that the OSM project has rules for automated edits and for imports, so please don't run your software on the real instance of the database without the community's consent.
If your focus is on parsing these plans and converting them to the OSM data model, rather than the actual uploading, consider having your software output data in the OSM XML file format. This is supported by many OSM tools, such as editors, and libraries for reading and writing such files exist for most popular development platforms.
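To make the file-output route concrete, here is a minimal sketch (not the project's actual code) of emitting one room traced from an SVG floor plan as OSM XML. The negative IDs follow the usual convention for objects that do not exist in the database yet; the coordinates, the generator name, and the tags are placeholders for illustration.

```typescript
// Sketch: serialize one closed polygon (a room) as OSM XML with Simple Indoor Tagging tags.
interface LatLon {
  lat: number;
  lon: number;
}

function roomToOsmXml(corners: LatLon[], level: string): string {
  const nodeLines: string[] = [];
  const ndRefs: string[] = [];
  corners.forEach((c, i) => {
    const id = -(i + 1); // negative id: a new object not yet in the database
    nodeLines.push(`  <node id="${id}" lat="${c.lat}" lon="${c.lon}"/>`);
    ndRefs.push(`    <nd ref="${id}"/>`);
  });
  ndRefs.push(`    <nd ref="-1"/>`); // repeat the first node to close the polygon

  return [
    `<?xml version="1.0" encoding="UTF-8"?>`,
    `<osm version="0.6" generator="svg-floorplan-parser">`,
    ...nodeLines,
    `  <way id="${-(corners.length + 1)}">`,
    ...ndRefs,
    `    <tag k="indoor" v="room"/>`, // per Simple Indoor Tagging
    `    <tag k="level" v="${level}"/>`,
    `  </way>`,
    `</osm>`,
  ].join("\n");
}

// Example: a small rectangular room on level 0 (placeholder coordinates).
console.log(
  roomToOsmXml(
    [
      { lat: 48.1495, lon: 11.5675 },
      { lat: 48.1495, lon: 11.5676 },
      { lat: 48.14957, lon: 11.5676 },
      { lat: 48.14957, lon: 11.5675 },
    ],
    "0",
  ),
);
```

A file produced this way can be opened in JOSM or another editor for review before anything is ever uploaded, which also keeps you on the right side of the automated-edits and import guidelines mentioned above.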
For my non-commercial, low-traffic web site, I successfully use Leaflet with standard raster tile layers from well-known sources.
I'd like to add additional layers containing very localized high-resolution maps. I've succeeded in making a usable raster tile-set from such a map, hosting the tiles on my own server, and adding that as an additional layer. But this creates a huge file count. My cheap shared-hosting account promises unlimited storage but limits file (actually, inode) counts. If I add one more such tile-set, I risk getting thrown off my server.
Clearly I can look for a hosting account with higher limits, and I'm exploring Cloud alternatives, too. (Comments welcome!)
Any other ideas? Are there free or very low-cost alternatives for non-commercial ventures to use for low-traffic tile storage?
Or: As I look at the localized, high-resolution maps – I see I could fairly easily trace them to create vector artwork without much loss of data -- and some gains in clarity. I use Adobe Illustrator. Is there a reasonably painless way to get from an .ai file (or some similar vector format) to a Leaflet layer? With a substantially lower file count compared to the raster alternative?
Apologies if I've misused terminology --please correct me-- or if I've cluelessly missed some incredibly obvious way of solving this problem.
TIA
This sounds like a good use case for the Leaflet.TileLayer.MBTiles plugin (demo):
A LeafletJS plugin to load tilesets in .mbtiles format.
The idea is to write your tiles into a single .mbtiles file (actually an SQLite database), so that you only need to host that single file on your server instead of thousands of individual tiles.
The drawback is that visitors now need to download the entire file before the map can actually display your tiles. But then navigation is extremely smooth, since tiles no longer need to be fetched from the network, but are all locally available to the browser.
As for generating the .mbtiles file, there are many implementations that can do the job.
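For reference, a minimal usage sketch is below. The factory and event names follow the plugin's documentation as I remember it, so double-check them against the version you install; the plugin ships without TypeScript typings, hence the `any`, and the paths, zoom levels, and map div id are placeholders.

```typescript
// Sketch: one public raster base layer plus a local high-resolution overlay
// served as a single .mbtiles file from your own hosting account.
declare const L: any; // Leaflet + Leaflet.TileLayer.MBTiles loaded via <script> tags

const map = L.map("map").setView([51.505, -0.09], 15); // placeholder center/zoom

// Ordinary raster base layer from a well-known tile server.
L.tileLayer("https://tile.openstreetmap.org/{z}/{x}/{y}.png", {
  attribution: "&copy; OpenStreetMap contributors",
}).addTo(map);

// The whole local tile set packed into one file on your server.
const overlay = L.tileLayer.mbTiles("/tiles/local-map.mbtiles", {
  minZoom: 15,
  maxZoom: 19,
});
overlay.on("databaseloaded", () => console.log("MBTiles file fetched and opened"));
overlay.on("databaseerror", (err: unknown) => console.error("Could not open MBTiles", err));
overlay.addTo(map);
```

The key point for the inode problem: the browser fetches exactly one file per overlay, however many tiles it contains.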
I am new to Moses and have trained it on a few sample data sets provided on websites.
I am looking for more data sets to train the system.
Are such data sets available online?
What should I search for on Google?
You can find several corpora at: http://opus.lingfil.uu.se
Also, some open-source applications include their bilingual PO files, but you have to check the license.
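As background, the Moses training scripts generally expect the corpus as two sentence-aligned plain-text files, one sentence per line per language. Some downloads already come in that shape; if yours is a tab-separated file instead, a small sketch of reshaping it follows (the file names are made up for illustration):

```typescript
// Sketch: turn a tab-separated parallel corpus (source<TAB>target per line) into
// the two aligned plain-text files typically fed to Moses training.
import { readFileSync, writeFileSync } from "node:fs";

const lines = readFileSync("corpus.tsv", "utf8")
  .split("\n")
  .filter((line) => line.trim() !== "");

const source: string[] = [];
const target: string[] = [];
for (const line of lines) {
  const [src, tgt] = line.split("\t");
  if (src && tgt) {
    // Keep the two sides strictly in step; skip malformed lines.
    source.push(src.trim());
    target.push(tgt.trim());
  }
}

writeFileSync("corpus.en", source.join("\n") + "\n");
writeFileSync("corpus.fr", target.join("\n") + "\n");
```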
My advice is to build a vertical (i.e. domain-specific) MT system, rather than a generic one, to get better results. So this decision will affect which corpora you choose.
I hope this helps!
Newbie post here, so forgive me if there's a better place for this, or if my question has been answered already.
I am trying to develop an interactive trail map for my town. I have added all of the trails to the OSM database, with good topology and tags for technical difficulty, quality, etc.:
http://www.openstreetmap.org/#map=16/49.0843/-117.7981
I am looking to develop the map using MapBox and Tilemill. My question is: If my main goal is to symbolize the OSM highway=cycleway features based on their difficulty tags, can I skip the whole tile creation process? If so, how would I go about symbolizing the various trails based on difficulty?
If I'm not interested in a custom basemap, is there any other advantage to using MBTiles? Here is my current working MapBox map, which uses a single MBTiles file:
http://www.kootenaymaps.ca/wp-content/uploads/2014/03/MapBoxEXAMPLE.html
Thanks in advance for any guidance here...
Barry
I have no experience with TileMill, but AFAIK it is designed to create tile-based web maps only. So you might use a desktop renderer such as Maperitive instead. Please also keep in mind that there are already various approaches for creating hiking maps online, as well as for GPS and printing ;).
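Another option, if you do want to skip tile creation entirely, is to export the trails from OSM as GeoJSON (e.g. with overpass turbo) and style them client-side in Leaflet. A rough sketch follows; it assumes each feature's tags end up as GeoJSON properties and that difficulty is recorded in a tag such as mtb:scale (adjust the file name and property to whatever you actually used).

```typescript
// Sketch: colour trail lines by a difficulty tag on the client, with no tile step.
// "trails.geojson" and the "mtb:scale" property are assumptions for illustration.
declare const L: any; // Leaflet loaded via <script>; no typings used here

const map = L.map("map").setView([49.0843, -117.7981], 15);
L.tileLayer("https://tile.openstreetmap.org/{z}/{x}/{y}.png", {
  attribution: "&copy; OpenStreetMap contributors",
}).addTo(map);

// One colour per difficulty grade; anything untagged falls back to grey.
const colourByDifficulty: Record<string, string> = {
  "0": "#2ca02c",
  "1": "#1f77b4",
  "2": "#ff7f0e",
  "3": "#d62728",
};

fetch("trails.geojson")
  .then((response) => response.json())
  .then((trails) => {
    L.geoJSON(trails, {
      style: (feature: any) => ({
        color: colourByDifficulty[feature?.properties?.["mtb:scale"]] ?? "#888888",
        weight: 3,
      }),
    }).addTo(map);
  });
```

For a trail network of this size the GeoJSON stays small, so the custom basemap (and MBTiles) question becomes separate from the question of styling the trails themselves.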
I'm trying to visualize a really huge network (3M nodes and 13M edges) stored in a database. For real-time interactivity, I plan to show only a portion of the graph based on user queries and expand it on demand. For instance, when a user clicks a node, I expand its neighborhood. (This is called "Search, Show Context, Expand on Demand" in this paper.)
I have looked into several visualization tools, including Gephi, D3, etc. They take a text file as input, but I have no idea how they can connect to a database and update the graph based on user interaction.
The linked paper implemented a system like that, but they didn't describe the tools they were using.
How can I visualize such data under the above constraints?
There are several solutions out there, but basically every one uses the same approach (sketched below):
create a layer on top of your data source that lets you query it at a high level
create a front-end layer that talks to the layer above
use whatever visualization tool you want
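A minimal sketch of how the second and third points might fit together, assuming a hypothetical middle-layer endpoint `/api/neighbors/<id>` that returns a node's neighbourhood as JSON; the endpoint, its response shape, and the merge logic are illustrative, not any particular product's API.

```typescript
// Sketch of "expand on demand": keep a small client-side graph model and grow it
// whenever the user clicks a node. The /api/neighbors endpoint is hypothetical;
// it stands in for whatever layer you put on top of your database.

interface GraphNode { id: string; label: string; }
interface GraphEdge { id: string; source: string; target: string; }

const nodes = new Map<string, GraphNode>();
const edges = new Map<string, GraphEdge>();

async function expandNode(nodeId: string): Promise<void> {
  const response = await fetch(`/api/neighbors/${encodeURIComponent(nodeId)}`);
  const neighborhood: { nodes: GraphNode[]; edges: GraphEdge[] } = await response.json();

  // Merge, so re-expanding a node never duplicates what is already on screen.
  for (const n of neighborhood.nodes) {
    if (!nodes.has(n.id)) nodes.set(n.id, n);
  }
  for (const e of neighborhood.edges) {
    if (!edges.has(e.id)) edges.set(e.id, e);
  }

  redraw();
}

function redraw(): void {
  // Hand the merged node/edge lists to whichever tool you pick in step 3
  // (Sigma.js, D3, KeyLines, ...); each has its own add/refresh calls.
  console.log(`graph now has ${nodes.size} nodes and ${edges.size} edges`);
}

// Wire this to your renderer's click event, e.g. expandNode(clickedNode.id).
```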
As miro marchi pointed out, there are several ways to achieve this goal, some of them locked to particular data sources, others offering much more freedom but requiring some coding skills.
Datasource
I would start with the choice of data source: given the type of data, I would probably choose Neo4j, Titan, or OrientDB (if you fancy something more exotic with more flexibility).
All of them offer a JSON REST API, the first with its own query language (Cypher) and the other two using the Blueprints / Rexster stack.
Neo4j supports the Blueprints stack as well, if you prefer Gremlin over Cypher.
For other solutions, such as other NoSQL or SQL databases, you will probably have to code a layer on top with its own REST API; that will work too, but I wouldn't recommend it for the kind of data you have.
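For concreteness, here is roughly what a neighbourhood query against Neo4j's transactional HTTP endpoint looked like in the 2.x era; the endpoint path, parameter syntax, and auth details vary between versions, so treat this as a sketch rather than a current reference.

```typescript
// Sketch: ask Neo4j (2.x-era HTTP API) for everything directly connected to one node.
// The /db/data/transaction/commit path and {nodeId} parameter syntax are the 2.x
// conventions; newer releases moved and renamed them.

async function fetchNeighborhood(nodeId: number): Promise<unknown> {
  const response = await fetch("http://localhost:7474/db/data/transaction/commit", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      statements: [
        {
          statement:
            "MATCH (n)-[r]-(m) WHERE id(n) = {nodeId} RETURN n, r, m LIMIT 100",
          parameters: { nodeId },
        },
      ],
    }),
  });
  return response.json(); // rows of (node, relationship, neighbour) triples
}
```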
Now only the third point is left, and here you have several choices.
Generic Viz tools
Sigma.js is a quite nice free and open-source graph visualization library. As far as I know, Linkurious uses a fork of it in their product.
KeyLines is a commercial graph visualization tool with advanced styling, analytics, and layouts, and they provide copy/paste demos if you are using Neo4j or Titan. It is not free, but it supports even older browsers (IE7 onwards).
VivaGraph is another free and open-source graph visualization library, but it has a smaller community than Sigma.js.
D3.js is the factotum of data visualization; you can build basically any kind of visualization on top of it, but the learning curve is quite steep.
Gephi is a free and open-source desktop solution; you will probably have to use an external plugin with it, but it supports most of the formats out there (GraphML, CSV, Neo4j, etc.).
Vendor specific
Linkurious is a commercial, Neo4j-specific, complete tool for searching and investigating data.
The Neo4j web admin console, based on D3.js: even if it's basic, it has improved a lot with the newer 2.x.x versions.
There are also other solutions that I probably forgot to mention, but the ones above should offer a good variety.
Other notes
The JS tools above will visualize well up to 1500/2000 nodes at once, due to JavaScript's limits.
If you want to visualize bigger graphs while expanding, I would recommend desktop solutions such as Gephi.
Disclaimer
I'm part of the KeyLines dev team.
We have been experimenting with using data visualisation techniques inspired by Edward Tufte to display our test suite and it has been very effective.
I would like to extend this to our Subversion Repository as I feel that there is a lot of information buried in the commit history that COULD be better represented in a graphical format.
I would like to be able to identify at a glance things like:
which modules are comparatively stable (a lot of writing, a little maintenance) and which ones have been written and rewritten
which modules are all one person's work and which are the work of many
Ideally I would like to annotate this information with other stuff from testing and performance tools, like:
code coverage
cross-reference data such as function call graphs
maybe even things like processor utilisation under consistent load
Anybody got any good tips, examples, utilities, etc.?
Our shop uses mostly the mighty Erlang but we will take heart and inspiration from any source.
Check out StatSVN as an example of a Subversion statistics generator:
http://www.statsvn.org/
http://www.statsvn.org/demo/ruby/
You can try SVNPlot. It first creates a local SQLite database from the svn commit log messages, then uses SQL queries and matplotlib to generate various graphs from it.
You can also run your own queries against that SQLite database to add custom graphs and statistics, for example:
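A quick sketch of such a custom query is below. The file, table, and column names are made up for illustration; inspect the schema SVNPlot actually generates (e.g. with `sqlite3 yourfile.db .schema`) and adjust before using it.

```typescript
// Sketch: count commits per author from the SQLite database SVNPlot builds.
// "svnplot.db", "SVNLog", "author" and "revno" are assumed names.
import Database from "better-sqlite3";

const db = new Database("svnplot.db", { readonly: true });

const rows = db
  .prepare(
    `SELECT author, COUNT(revno) AS commits
       FROM SVNLog
   GROUP BY author
   ORDER BY commits DESC`,
  )
  .all() as Array<{ author: string; commits: number }>;

for (const row of rows) {
  console.log(`${row.author}\t${row.commits}`);
}
```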
(Disclaimer: I am the main author of SVNPlot. Do let me know if you find it useful or if you have any suggestions for improvements.)
You probably have seen codeswarm which made some headlines earlier this year when it was used to generate some cool videos of collaboration in Ruby on Rails--see the Visualizing Rails & Git blog post for a great summary and sample videos.
You might also get some ideas from history flow, which Jeff Atwood linked to in a recent Coding Horror post.