is there any way to run Tinkerpop Gremlin 3.1 traversals in OrientDB?
I've noticed that the DBMS currently supports the previous version (2.x) of the TinkerPop traversal language which, for example, only allows filtering edges directly by label, but not vertices :( .
I was quite satisfied with gremlin-scala and orientdb-gremlin, but I found that not all my queries were executed efficiently (some indexes were ignored).
Is there any other way?
Thanks in advance :)
Orientdb-gremlin is indeed the only available driver, and while it works pretty well for basic cases, there's some work left on index usage. If you report your cases in a GitHub issue we can have a look. Best, obviously, is if you submit a PR :)
Related
I have already looked at Efficient paging in MongoDB using mgo and asked https://stackoverflow.com/review/low-quality-posts/25723764
I got the excellent response provided by @icza, who shares his library https://github.com/icza/minquery.
However, as he said, "Starting with MongoDB 4.2, an index hint must be provided. Use the minquery.NewWithHint() constructor."
The problem is that the minquery.NewWithHint() constructor only seems to be available in version 2.0.0, which switched from gopkg.in/mgo.v2 to github.com/globalsign/mgo.
How can I solve this problem?
gopkg.in/mgo.v2 has long gone unmaintained. The easiest solution for you would be to switch to the github.com/globalsign/mgo mgo driver. It has an identical API, so most likely you only have to change the import paths. It is still somewhat supported, but I believe it will fade away in favor of the official mongo-go driver. If you choose to switch to mongo-go, it has "built-in" support for specifying the index min parameter for queries. But know that the mongo-go driver has a different API.
Another option would be to fork minquery and apply the commits I made for the v2.0.0 version, including the support for index hints.
I'd like to use MongoDB with LINQ, simply because I don't like not being able to check queries at compile time.
So I searched a bit and found NoRM. However, I am having a hard time deciding whether it's "safe" to move away from the official driver.
So I was wondering if someone can tell me the key differences between the official driver and NoRM?
Also, what can NoRM do that the official driver can't?
Is it possible to implement LINQ on top of the official driver?
Thanks in advance
I suggest using the official MongoDB driver because it will contain all the latest features and any issues will be fixed ASAP. As far as I know, the last commit in the NoRM repository was almost half a year ago, so.. If you want LINQ support you can use fluent-mongo on top of the official driver, but I believe LINQ support should arrive in the official driver soon.
I'm wondering which driver is the best among the following:
mongodb-csharp driver
simple-mongodb driver
NoRM
Which one is considered the best?
I think there are even more flavours: the one you call mongodb-csharp is actually two:
https://github.com/samus/mongodb-csharp
https://github.com/mongodb/mongo-csharp-driver
The first is a bit more mature and is widely used in the field. The second is a recent development, but it comes from 10gen, the creators of MongoDB. The implementation is looking good and is rather similar to the samus driver. If you need something for production right now, I'm not sure what to advise, but in the long run I'd go with the 10gen driver.
The one thing that it currently doesn't offer is Linq integration. This could be important to you.
I have no experience with the NoRM and simple-mongodb drivers.
I would use the official C# driver released by MongoDB.
http://www.mongodb.org/display/DOCS/CSharp+Language+Center
I've been using it and I like it so far.
I know parallel collections will become available.
What form will these take, and what else are we likely to see?
For the full list, see: Beyond 2.8 - A Roadmap
The main thing seems to be parallel collections. They are a drop-in replacement for the standard Scala collections, but their methods are executed in parallel.
From the scala days presentation by Aleksandar Prokopec:
Scala parallel collections that will be introduced in 2.8 reimplement standard collection operations while keeping compatibility with existing Scala collection framework. They also introduce new operations characteristic for parallel algorithms, and a few contracts the programmer should be aware of.
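Concretely, in the API as shipped, a sequential collection becomes parallel with a single `.par` call. A minimal sketch (note: on Scala 2.13+ the parallel collections moved out to the separate scala-parallel-collections module):

```scala
// Minimal sketch: .par returns the parallel counterpart of a collection,
// and the familiar methods (map, filter, sum, ...) then run in parallel.
val nums = (1 to 100).toVector

val seqResult = nums.map(_ * 2).filter(_ % 3 == 0).sum
val parResult = nums.par.map(_ * 2).filter(_ % 3 == 0).sum

// Both compute the same value; only the execution strategy differs.
```

Side-effecting operations need care, though: the order in which elements are visited is non-deterministic, which is part of the "contracts the programmer should be aware of" mentioned above.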
For a good video explanation of parallel collections, see Scala Parallel Collections - Aleksandar Prokopec
Have a look at this: Changes between Scala 2.8 and Scala 2.9
http://www.infoq.com/interviews/martin-odersky-scala-future
It's from Jan 2011, so fairly recent. Might help you ^_^.
Official site: Scala 2.9.0 RC1 (from 2011-03-25, scala-lang.org)
I'd like to find a good and robust MapReduce framework that can be used from Scala.
To add to the answer on Hadoop: there are at least two Scala wrappers that make working with Hadoop more palatable.
Scala Map Reduce (SMR): http://scala-blogs.org/2008/09/scalable-language-and-scalable.html
SHadoop: http://jonhnny-weslley.blogspot.com/2008/05/shadoop.html
Update (5 Oct 2011):
There is also the Scoobi framework, which has awesome expressiveness.
http://hadoop.apache.org/ is language agnostic.
Personally, I've become a big fan of Spark
http://spark-project.org/
You have the ability to do in-memory cluster computing, significantly reducing the overhead you would experience from disk-intensive mapreduce operations.
You may be interested in scouchdb, a Scala interface to using CouchDB.
Another idea is to use GridGain. ScalaDudes have an example of using GridGain with Scala. And here is another example.
A while back, I ran into exactly this problem and ended up writing a little infrastructure to make it easy to use Hadoop from Scala. I used it on my own for a while, but I finally got around to putting it on the web. It's named (very originally) ScalaHadoop.
For a Scala API on top of Hadoop, check out Scoobi; it is still in heavy development but shows a lot of promise. There is also some effort to implement distributed collections on top of Hadoop in the Scala incubator, but that effort is not usable yet.
There is also a new Scala wrapper for Cascading from Twitter, called Scalding.
After looking very briefly over the documentation for Scalding, it seems that while it makes the integration with Cascading smoother, it still does not solve what I see as the main problem with Cascading: type safety. Every operation in Cascading operates on Cascading's tuples (basically a list of field values with or without a separate schema), which means that type errors, e.g. joining a key as a String with a key as a Long, lead to run-time failures.
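A toy illustration of that point in plain Scala (not actual Cascading or Scalding API; the values and names here are made up for the sketch): with untyped field values a mismatched join key slips past the compiler, while typed keys are rejected at compile time.

```scala
// Untyped "fields", roughly like Cascading's tuple model: keys are Any.
val left:  Seq[(Any, String)] = Seq(("1", "alice"))  // key stored as String
val right: Seq[(Any, Int)]    = Seq((1L, 42))        // key stored as Long

// This join compiles fine, but "1" == 1L is false, so the key mismatch
// only shows up at run time as an empty (or wrong) result.
val untypedJoin = for {
  (k1, v1) <- left
  (k2, v2) <- right
  if k1 == k2
} yield (k1, v1, v2)

// With typed keys the same mistake is a compile error instead:
val typedLeft: Map[String, String] = Map("1" -> "alice")
// typedLeft(1L)   // does not compile: type mismatch (Long vs String)
```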
to further jshen's point:
Hadoop Streaming simply uses Unix standard streams: your code (in any language) just has to read from stdin and write tab-delimited records to stdout. Implement a mapper and, if needed, a reducer (and, if relevant, configure it as the combiner).
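That contract can be sketched in a few lines of Scala; this is a hypothetical word-count mapper (the object name and the `mapLine` helper are my own), which would be passed to Hadoop via the `-mapper` option:

```scala
// Sketch of a Hadoop Streaming mapper: read records from stdin,
// emit tab-delimited key/value pairs ("word\t1") on stdout.
object StreamingMapper {
  // Pure part, kept separate so it can be tested without stdin.
  def mapLine(line: String): Seq[String] =
    line.split("\\s+").filter(_.nonEmpty).toSeq.map(w => s"$w\t1")

  def main(args: Array[String]): Unit =
    scala.io.Source.stdin.getLines().flatMap(mapLine).foreach(println)
}
```

A reducer follows the same pattern: it reads the sorted `key<TAB>value` lines back from stdin and emits aggregated pairs on stdout.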
I've added a MapReduce implementation using Hadoop on GitHub with a few test cases here: https://github.com/sauravsahu02/MapReduceUsingScala.
Hope that helps. Note that the application is already tested.