How to present geographical and other data on the web - presentation

I've got a table that I'd like to present. However, a lot of the information in it is only useful in aggregated or visual form.
For example, the country column by itself is boring, but aggregating all the entries for a country would be really useful. Coordinates are in there as well, so any solution should be able to present stuff on a map.
Note that the solution can be non-web, but I'd really prefer a web application everyone can access. What I've found so far is just the Google Maps API, but that's not very good at showing non-geographical information, is it?
Note that the table has a lot of dimensions, often nominal or ordinal (i.e. no numbers), so visual and plotting-focussed libraries are not that good.

EDIT: maybe this will help you, in the absence of other answers.
Today, this article popped into my RSS reader: Patterns of Destruction?: Visualizing Earthquake Data w/Tableau.
The author uses Tableau to visualize his data and also mentions Data Applied and GoodData.
Combine the Google Maps API with something like the Javascript Visualization Toolkit?
There are many libraries out there that might do the trick as well (see the sketch after this list):
Raphael
Axis
...
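To make the "Google Maps for the geography, a charting library for the aggregates" combination concrete, here is a minimal TypeScript sketch. The Row shape, the #map element id, and the per-country count are assumptions made up for illustration; the aggregated counts could just as well be handed to Raphael or another charting library instead of being shown in marker titles.

```typescript
// Provided by the Maps JavaScript API <script> tag (or use @types/google.maps).
declare const google: any;

// Hypothetical row shape for the table described in the question.
interface Row {
  country: string;
  lat: number;
  lng: number;
  category: string; // a nominal dimension
}

function render(rows: Row[]): void {
  // Aggregate: count entries per country (this is what you'd feed to a chart).
  const perCountry = new Map<string, number>();
  for (const r of rows) {
    perCountry.set(r.country, (perCountry.get(r.country) ?? 0) + 1);
  }

  // Map: one marker per row, titled with its country's aggregate count.
  const map = new google.maps.Map(document.getElementById('map') as HTMLElement, {
    center: { lat: 20, lng: 0 },
    zoom: 2,
  });
  for (const r of rows) {
    new google.maps.Marker({
      position: { lat: r.lat, lng: r.lng },
      map,
      title: `${r.country}: ${perCountry.get(r.country)} entries (${r.category})`,
    });
  }
}
```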

Lighting diagrams language and notation

I know the use case might be specific, but more and more work in every industry sector is being digitized, and so is the communication between different departments, which sometimes speak very different languages. I searched the internet, but I wasn't able to find a clear answer (either I didn't find the right search phrases or the internet itself just doesn't know).
Here's my scenario: I'm working with several departments that work with diagrams (for example a lighting setup). Such a diagram serves several purposes:
which devices are used?
where are they placed?
where are they pointing?
how are they configured (e.g. exposure)?
They tend to export their finalized diagram as either an image or a PDF, which is fine if you want to print it out but considerably less helpful if another department (mine) has to work with the raw information. That's why I wondered whether there's some kind of industry standard (SVG, XML, JSON, etc.) which is both supported by the programs these departments use and can be interpreted by some sort of programming language. Do you know of anything like that?
Thanks in advance!

Mapbox Basics: How do I use one style, multiple shapefiles, for multiple webpages

This should be easy, as it's surely a fundamental function of this service, but I'm stumped.
I have 60 shapefiles. The outcome I want is 60 different webpages, each showing a different map, but all using one consistent style.
I have the style built in Mapbox, and I can add the shapefiles to the "style" in Mapbox, but that conflicts with my understanding of separating style from content. It seems like the shapefiles should live in some data repository, the style should live somewhere else, and the API would mash the two together as required. As far as I can tell, it doesn't work that way. I'm hoping some experienced user can simplify this for me, because I'm surely just missing some basic understanding of the general workflow for the service.
When you upload your shapefiles to Mapbox Studio, you create a source for your data on the Mapbox server.
You can then style your data further interactively at run time using the filter property and expressions (see the sketch below). Can you tell me why you want to make 60 different maps? It might be worth looking at some examples and creating one map that you filter within the map view.
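If the 60 pages really do need to stay separate, one pattern that fits Mapbox's model is: upload the shapefiles once as tilesets (sources), reference them from the single shared style, and let each page narrow the data at run time with a Mapbox GL JS filter expression. A minimal sketch, assuming a hypothetical layer id 'parcels' with a 'region' property; the style URL and access token are placeholders.

```typescript
import mapboxgl from 'mapbox-gl';

// One shared style for every page; each page only changes the filter.
// The token, style URL, layer id ('parcels') and property name ('region')
// are placeholders -- substitute your own.
mapboxgl.accessToken = 'YOUR_ACCESS_TOKEN';

function showRegion(regionId: string): mapboxgl.Map {
  const map = new mapboxgl.Map({
    container: 'map',                        // <div id="map"> on the page
    style: 'mapbox://styles/youruser/yourstyle',
    center: [-98, 39],
    zoom: 4,
  });

  map.on('load', () => {
    // Keep only the features whose `region` property matches this page.
    map.setFilter('parcels', ['==', ['get', 'region'], regionId]);
  });

  return map;
}

// Page 17 of 60 would just call:
showRegion('region-17');
```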

Using Watson AlchemyAPI on medical data

I'm trying to create a Java app which uses information from a medical guideline to support the activity of a doctor. The use case is that when the doctor asks a question or inputs a scenario to the system, it responds with the recommendations from the guideline that best fit the situation.
My idea is to extract names, relations and their knowledge graph from my document and use them to do some reasoning.
My questions are:
With AlchemyAPI, can I extract entities using an external service (like a medical dictionary such as UMLS or MedlinePlus)?
For those entities, can I extract their knowledge graph and expand it with reasoning?
If it is not possible, would Knowledge Studio help me with this task? (My document is a relatively small PDF, at most 100 pages.)
Out of curiosity: is there any detailed Javadoc for the Watson services other than the SDK docs, basic class tree, and tutorials?
Thank you for your help.
Take a look at the Natural Language Understanding (NLU) demo and see if the results based on some text from your use case are good enough. Otherwise, you will have to train a Knowledge Studio model and use it with NLU.
Watson doesn't have a knowledge graph that you can manipulate, so you will have to develop this part yourself. Once you get the entities from (1), you will have to create the knowledge graph.
Yes, see (1).
From your answer I assume you are using Java; in any case, I think you first need to read the documentation for:
Natural Language Understanding
Knowledge Graph
Discovery. I think the 100 pages you need to analyze could be stored in this service, which will also help you run some other NLP tasks on those documents.
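To give a sense of what the NLU entity-extraction call looks like in code, here is a minimal sketch using the ibm-watson Node.js SDK (the Java SDK you would use in your app offers the equivalent analyze call and features options). The API key, service URL, version date, and sample sentence are placeholders; a trained Knowledge Studio custom model would be plugged in through the entities options.

```typescript
import NaturalLanguageUnderstandingV1 from 'ibm-watson/natural-language-understanding/v1';
import { IamAuthenticator } from 'ibm-watson/auth';

// Placeholders: supply your own API key and service URL from IBM Cloud.
const nlu = new NaturalLanguageUnderstandingV1({
  version: '2021-08-01',
  authenticator: new IamAuthenticator({ apikey: 'YOUR_API_KEY' }),
  serviceUrl: 'https://api.us-south.natural-language-understanding.watson.cloud.ibm.com',
});

async function extractEntities(text: string) {
  // Ask only for entities; a custom Knowledge Studio model id could be
  // supplied via `entities: { model: '...' }` once you have trained one.
  const { result } = await nlu.analyze({
    text,
    features: { entities: { limit: 20, sentiment: false } },
  });
  return result.entities ?? [];
}

// Example input sentence, made up for illustration.
extractEntities('Metformin is recommended as first-line therapy for type 2 diabetes.')
  .then((entities) => entities.forEach((e) => console.log(e.type, e.text)));
```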

BPMN visualization

We need to visualize a BP (business process) as BPMN, but NOT by hand using a modeler. We need to do it automatically in a CRM web-based system written in PHP. I have input data (e.g. an array, XML; the format doesn't matter, but not BPEL), and I need to process it into a nice BPMN graph (using SVG).
We have a first nice-looking implementation of it. We use a matrix to draw: we make several passes through the matrix and optimize the graph on each pass. It works fast, but it is not flexible and is hard to rebuild, upgrade, or extend with new features. We made this algorithm ourselves (I mean we didn't find it on Google or in books). The problem is that we couldn't find any algorithms on the internet; I suppose we don't know the correct keywords. Every attempt returned BPEL visualization from BPMN, and "data flow visualization" returned modelers...
Please help us find some algorithms, or give us the correct keywords to search for.
I think you're probably looking for "graph layout algorithms". The only library I'm aware of that can (I think) generate BPMN directly is the yFiles library from yWorks. It's not free. They do however offer a free application using the library that does auto-layout. Perhaps you could do some prototyping with that.
If that's not applicable, there are several other options. I'm not aware that any of these can generate BPMN symbols directly; you'd have to construct the symbols yourself (see the sketch after this list). However, all of them will auto-layout graphs according to various algorithms, and all are open source/free.
graphviz. Written in C. Quite old now but well used, stable and scalable.
tulip. Newer than graphviz. Haven't used it but heard good things about flexibility and scalability.
See also this post for JavaScript-based options.
There are many more, just google for graph layout algorithms / libraries.
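As one concrete direction: instead of hand-tuning the matrix passes, you could serialize the process graph to Graphviz's DOT language, let `dot` (or a JavaScript port such as viz.js) do the layout, and post-process the resulting SVG into BPMN symbols. Below is a minimal sketch of the DOT-emission step only; the ProcessNode/ProcessEdge shapes are invented for illustration.

```typescript
// Turn a simple process description into Graphviz DOT text.
// Feed the output to `dot -Tsvg` (or viz.js in the browser) for auto-layout,
// then skin the nodes with BPMN symbols in the resulting SVG.
interface ProcessNode { id: string; label: string; kind: 'task' | 'gateway' | 'event'; }
interface ProcessEdge { from: string; to: string; label?: string; }

const SHAPE: Record<ProcessNode['kind'], string> = {
  task: 'box',
  gateway: 'diamond',
  event: 'circle',
};

function toDot(nodes: ProcessNode[], edges: ProcessEdge[]): string {
  const lines = ['digraph process {', '  rankdir=LR;'];
  for (const n of nodes) {
    lines.push(`  "${n.id}" [label="${n.label}", shape=${SHAPE[n.kind]}];`);
  }
  for (const e of edges) {
    const label = e.label ? ` [label="${e.label}"]` : '';
    lines.push(`  "${e.from}" -> "${e.to}"${label};`);
  }
  lines.push('}');
  return lines.join('\n');
}

// Example: start -> task -> gateway -> end
console.log(toDot(
  [
    { id: 'start', label: 'Start', kind: 'event' },
    { id: 't1', label: 'Check order', kind: 'task' },
    { id: 'g1', label: 'OK?', kind: 'gateway' },
    { id: 'end', label: 'End', kind: 'event' },
  ],
  [
    { from: 'start', to: 't1' },
    { from: 't1', to: 'g1' },
    { from: 'g1', to: 'end', label: 'yes' },
  ],
));
```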
hth.

Open Source collaborative filtering frameworks

I was wondering if there are any open source frameworks that will help me add the following type of functionality to my website:
1) If I am viewing a particular product, I would like to see what other products may be interesting to me. This information may be deduced by calculating for example what other people in my region (or any other characteristic of my profile) bought in addition to the product that I am viewing. Kind of like what Amazon.com does.
2) Deduce relationships between people based on their profile, interaction with one another on the website (via commenting on one another's posts, for example), use of the website in terms of areas most navigated, products bought in common, etc.
I am not looking for an open source website with this functionality, but something like an object model into which I can feed information about users and their use of the site, including rules about relationships, and then at a later point ask it the questions described in (1) and (2) above.
Any pointers to white papers / general information about best approaches to do this, or any related links will really help too.
(I am the developer of Taste, which is now part of Apache Mahout)
1) You're really asking for two things here:
a) Recommend items I might like
b) Favor items that are similar to the thing I am currently looking at.
Indeed, Mahout Taste is all about answering a). Everything it does supports systems like this. Take a look at the documentation to get started, and ask any questions on mahout-user@apache.org.
For 1b) in particular, Mahout has two answers:
If you are only interested in which items are similar to the current item, you would be interested in the ItemSimilarity abstraction in Mahout (org.apache.mahout.cf.taste.similarity.ItemSimilarity) and its implementations, like PearsonCorrelationSimilarity. Based on a set of user-item ratings, this can tell you an estimated similarity between any two items. You'd then just pick the most similar items. In fact, look at the TopItems class in Mahout, which can figure this out for you quickly.
But also, you can combine a) and b) by computing recommendations, then applying a Rescorer implementation which then favors items that are similar to the currently-viewed item.
2) Yes, likewise, you would be interested in the UserSimilarity abstraction, its implementations, etc. This would deduce similarities based on item ratings. Mahout, however, does not help you deduce these ratings by, say, looking at user behavior. That is domain-specific and up to you.
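To make concrete what an ItemSimilarity such as PearsonCorrelationSimilarity computes from the ratings, here is a small TypeScript sketch of the underlying calculation, over the users who rated both items. This only illustrates the math; in practice you would use the Mahout classes named above rather than reimplementing it.

```typescript
// Ratings: userId -> (itemId -> rating).
type Ratings = Map<string, Map<string, number>>;

// Pearson correlation between two items, computed over the users who rated
// both -- roughly what an item-based similarity like Mahout's
// PearsonCorrelationSimilarity estimates.
function itemSimilarity(ratings: Ratings, itemA: string, itemB: string): number {
  const a: number[] = [];
  const b: number[] = [];
  for (const userRatings of ratings.values()) {
    const ra = userRatings.get(itemA);
    const rb = userRatings.get(itemB);
    if (ra !== undefined && rb !== undefined) {
      a.push(ra);
      b.push(rb);
    }
  }
  if (a.length < 2) return 0; // not enough overlap to say anything

  const mean = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / xs.length;
  const ma = mean(a);
  const mb = mean(b);
  let cov = 0, varA = 0, varB = 0;
  for (let i = 0; i < a.length; i++) {
    cov += (a[i] - ma) * (b[i] - mb);
    varA += (a[i] - ma) ** 2;
    varB += (b[i] - mb) ** 2;
  }
  return varA === 0 || varB === 0 ? 0 : cov / Math.sqrt(varA * varB);
}

// "Most similar items" is then just: compute this against every other item
// and keep the top N (the bookkeeping Mahout's TopItems helper does for you).
```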
Sounds confusing? Read the docs and feel free to follow up on mahout-user@apache.org, where I can tell you more.
I am researching the same topic, as I'm working on a project to help people decide how to vote on California's complicated ballot measures. Here are some open-source collaborative filtering engines that I've found:
Vogoo (PHP)
acts_as_recommendable (Ruby on Rails)
Mahout (formerly Taste) (Java)
There's also a good overview of these engines here.
There are also the Duine framework and OpenSlopeOne.
But in my opinion, Mahout is still the best.
You can find a survey about Open Source Recommender Systems here:
http://girlincomputerscience.blogspot.com.br/2012/11/open-source-recommendation-systems.html
Hope it helps!
You can find a List of Recommender Systems here