I've set up an OrientDB distributed instance. I launched the Gremlin console and opened the graph.
Even though I can retrieve vertices by index, I cannot do any of the following:
g.V().has('#class','user').limit(10)
g.V().has('#class','user').valueMap()
g.V().has('#class','user').select('user_name')
and I get the following errors:
No signature of method:
com.tinkerpop.gremlin.groovy.GremlinGroovyPipeline.limit() is
applicable for argument types: (java.lang.Integer) values: [10]
Possible solutions: wait(), min(), last(), first(),
getAt(java.lang.Integer), wait(long)
No signature of method:
com.tinkerpop.gremlin.groovy.GremlinGroovyPipeline.valueMap() is
applicable for argument types: () values: []
No signature of method:
com.tinkerpop.gremlin.groovy.GremlinGroovyPipeline.select() is
applicable for argument types: (java.lang.String) values: [user_name]
Possible solutions: select(),
select([Lcom.tinkerpop.pipes.PipeFunction;),
select([Lgroovy.lang.Closure;), select(java.util.Collection),
select(java.util.Collection, [Lcom.tinkerpop.pipes.PipeFunction;),
select(java.util.Collection, [Lgroovy.lang.Closure;)
You are mixing versions. The syntax of your Gremlin is TinkerPop 3.x, but you clearly aren't using a version of the OrientDB TinkerPop implementation that supports it. If you want to use that syntax, you need to use:
https://github.com/orientechnologies/orientdb-gremlin
I have a query where the result should be ordered depending on the passed parameter:
#Query("""MATCH (u:User {userId:{uid}})-[:KNOWS]-(:User)-[h:HAS_STUFF]->(s:Stuff)
WITH s, count(h) as count ORDER BY count {order}
RETURN o, count SKIP {skip} LIMIT {limit}""")
fun findFromOthersByUserIdAndSortByAmountOfStuff(
#Param("uid") userId: String,
#Param("skip") skip: Int,
#Param("limit") limit: Int,
#Param("order) order: String): List<StuffWithCountResult>
For the order parameter I use the following enum and its sole method:
enum class SortOrder {
    ASC,
    DESC;

    fun toNeo4JSortOrder(): String = when (this) {
        ASC -> ""
        DESC -> "DESC"
    }
}
It seems that SDN does not handle the {order} parameter properly. On execution, I get an exception saying:
Caused by: org.neo4j.kernel.impl.query.QueryExecutionKernelException: Invalid input 'R': expected whitespace, comment or a relationship pattern (line 3, column 5 (offset: 244))
" RETURN o, count SKIP {skip} LIMIT {limit}"
^
If I remove the parameter from the Cypher statement or replace it with a hardcoded DESC, the method succeeds. I believe it's not because of the enum, since I use (other) enums in other repository methods and all of those methods succeed. I already tried a different parameter name like sortOrder, but this did not help.
What am I missing here?
This is the wrong model for changing sorting and paging information. You can skip to the answer below for using those options, or continue reading for an explanation of what is wrong in your code as it stands.
You cannot bind where things aren't allowed to be bound:
You cannot bind a parameter into a syntax element of the query that is not set up for "parameter binding". Parameter binding doesn't do simple string substitution (because you would be open to injection attacks) but rather uses binding APIs to bind parameters. You are treating the query annotation as if it performed string substitution instead, and that is not what is happening.
The parameter binding docs for Neo4j and the Java manual for Query Parameters show exactly where you can bind; the only places allowed are:
in place of String Literals
in place of Regular Expressions
String Pattern Matching
Create node with properties, as the properties
Create multiple nodes with properties, as the properties
Setting all properties of a node
numeric values for SKIP and LIMIT
as the Node ID
as multiple Node IDs
Index Value
Index Query
Nothing there says that what you are trying, binding in the ORDER BY clause, is allowed.
That isn't to say that the authors of Spring Data couldn't work around this and allow binding in other places, but it doesn't appear they have done more than what the Neo4j Java API allows.
You can instead use the Sort class:
(the fix to allow this is marked for version 4.2.0.M1 which is a pre-release as of Sept 8, 2016, see below for using milestone builds)
Spring Data has a Sort class; if your @Query-annotated method has a parameter of this type, it should apply sorting and allow that to dynamically modify the query.
I assume the code would look something like (untested):
#Query("MATCH (movie:Movie {title={0}})<-[:ACTS_IN]-(actor) RETURN actor")
List<Actor> getActorsThatActInMovieFromTitle(String movieTitle, Sort sort);
Or you can use the PageRequest class / Pageable interface:
(the fix to allow this is marked for version 4.2.0.M1 which is a pre-release as of Sept 8, 2016, see below for using milestone builds)
In current Spring Data + Neo4j docs you see examples using paging:
#Query("MATCH (movie:Movie {title={0}})<-[:ACTS_IN]-(actor) RETURN actor")
Page<Actor> getActorsThatActInMovieFromTitle(String movieTitle, PageRequest page);
(sample from Cypher Examples in the Spring Data + Neo4j docs)
And this PageRequest class also allows sorting parameterization. Anything that implements Pageable will do the same. Using Pageable instead is probably more proper:
#Query("MATCH (movie:Movie {title={0}})<-[:ACTS_IN]-(actor) RETURN actor")
Page<Actor> getActorsThatActInMovieFromTitle(String movieTitle, Pageable page);
You might be able to use SpEL in earlier versions:
As an alternative, you can look at using SpEL expressions to do substitutions in other areas of the query. I am not familiar with it but it says:
Since this mechanism exposes special parameter types like Sort or Pageable as well, we’re now able to use pagination in native queries.
But the official docs seem to say it is more limited.
And you should know this other information:
Here is someone reporting your exact same problem in a GitHub issue, which then leads to the DATAGRAPH-653 issue that was marked as fixed in version 4.2.0.M1. It references other SO questions (such as Paging and sorting in Spring Data Neo4j 4) which are outdated and no longer correct, so you should ignore those.
Finding Spring Data Neo4j Milestone Builds:
You can view the dependency information for any release on the project page. For the 4.2.0.M1 build, the information for Gradle (you can infer the Maven equivalent) is:
dependencies {
    compile 'org.springframework.data:spring-data-neo4j:4.2.0.M1'
}

repositories {
    maven {
        url 'https://repo.spring.io/libs-milestone'
    }
}
Any newer final release should be used instead.
This is a non-working attempt at using Flink's fold with a Scala anonymous function:
val myFoldFunction = (x: Double, t:(Double,String,String)) => x + t._1
env.readFileStream(...).
...
.groupBy(1)
.fold(0.0, myFoldFunction : Function2[Double, (Double,String,String), Double])
It compiles fine, but at execution I get a "type erasure issue" (see below). Doing this in Java works, but is of course more verbose; I like the concise and clear lambdas. How can I do this in Scala?
Caused by: org.apache.flink.api.common.functions.InvalidTypesException:
Type of TypeVariable 'R' in 'public org.apache.flink.streaming.api.scala.DataStream org.apache.flink.streaming.api.scala.DataStream.fold(java.lang.Object,scala.Function2,org.apache.flink.api.common.typeinfo.TypeInformation,scala.reflect.ClassTag)' could not be determined.
This is most likely a type erasure problem.
The type extraction currently supports types with generic variables only in cases where all variables in the return type can be deduced from the input type(s).
The problem you encountered is a bug in Flink [1]. The problem originates from Flink's TypeExtractor and the way the Scala DataStream API is implemented on top of the Java implementation. The TypeExtractor cannot generate a TypeInformation for the Scala type and thus returns a MissingTypeInformation. This missing type information is manually set after creating the StreamFold operator. However, the StreamFold operator is implemented in a way that it does not accept a MissingTypeInformation and, consequently, fails before setting the right type information.
I've opened a pull request [2] to fix this problem. It should be merged within the next two days. Once it is in, using the latest 0.10 snapshot version should fix your problem.
[1] https://issues.apache.org/jira/browse/FLINK-2631
[2] https://github.com/apache/flink/pull/1101
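For reference, here is a minimal sketch of how the call should look once the fix is in a snapshot build, with the fold function written inline instead of ascribed to Function2. The fromElements source and the tuple values are hypothetical stand-ins for your readFileStream input, and this assumes the 0.9/0.10-era Scala streaming API you are using, where groupBy is still available:
import org.apache.flink.streaming.api.scala._

object FoldSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Hypothetical source standing in for readFileStream(...):
    // a stream of (value, key, payload) tuples.
    val stream: DataStream[(Double, String, String)] =
      env.fromElements((1.0, "a", "x"), (2.0, "a", "y"), (3.0, "b", "z"))

    // Group on the second tuple field and sum the first one,
    // exactly as in the question, just with the lambda inlined.
    val sums = stream
      .groupBy(1)
      .fold(0.0, (acc: Double, t: (Double, String, String)) => acc + t._1)

    sums.print()
    env.execute("fold sketch")
  }
}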
I am trying to implement an ML pipeline in Spark using Scala, and I used the sample code available on the Spark website. I am converting my RDD[LabeledPoint] into a DataFrame using the functions available in the SQLContext package. It gives me a NoSuchElementException:
Code Snippet:
Error Message:
Error at the line Pipeline.fit(training_df)
The type Vector you have inside your for loop (prob: Vector) takes a type parameter, such as Vector[Double], Vector[String], etc. You just need to specify the type of data your vector will store.
As a side note: the single-argument overloaded version of createDataFrame() you are using seems to be experimental, in case you are planning to use it for a long-term project.
The pipeline in your code snippet is currently empty, so there is nothing to be fit. You need to specify the stages using .setStages(). See the example in the spark.ml documentation here.
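For illustration, a minimal sketch of a non-empty pipeline, loosely following the spark.ml documentation example. The stages shown here (a tokenizer, a hashing TF, and a logistic regression) and the column names are assumptions, and training_df stands for the DataFrame you already built from your RDD:
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

// Hypothetical stages, as in the spark.ml docs example.
val tokenizer = new Tokenizer()
  .setInputCol("text")
  .setOutputCol("words")
val hashingTF = new HashingTF()
  .setInputCol(tokenizer.getOutputCol)
  .setOutputCol("features")
val lr = new LogisticRegression()
  .setMaxIter(10)

// The pipeline must be given its stages before fit() is called;
// an empty pipeline has nothing to train.
val pipeline = new Pipeline()
  .setStages(Array(tokenizer, hashingTF, lr))

val model = pipeline.fit(training_df)  // training_df: the DataFrame from your question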
Python allows hash values only for immutable objects. For example,
hash((1,2,3))
works, but
hash([1,2,3])
raises a TypeError: unhashable type: 'list'. See the Python documentation. However, when I wrap a C++ class in Boost.Python via the usual boost::python::class_<> function, every generated Python class has a default hash function, where the hash value is related to the object's location in memory. (On my 64-bit OS, the hash value is the location divided by 8.)
When I expose a class to Python whose members can be changed (any mutable data structure, so this is a very common situation!), I do not want a default hash function; I want a call to hash() to raise the same TypeError that users receive for Python's own mutable data types. In particular, users shouldn't be able to accidentally use mutable objects as dictionary keys. How can I achieve this in the C++ code?
I found out how to do it:
boost::python::class_<MyClass>("MyClass")
    .setattr("__hash__", boost::python::object());
A boost::python::object which is initialized with no arguments corresponds to None. The procedure for disabling hash generation in the pure Python C API is a little more complicated, as is described in the Python documentation. However, the above code snippet apparently does the job in boost::python.
On a side note: the Boost.Python behaviour mirrors the default behaviour of classes in Python, where objects are basically hashable based on their object id (derived from id(x)):
>>> hash(object())
8795488122377
>>> class MyClass(object): pass
...
>>> hash(MyClass)
878579
>>> hash(MyClass())
8795488082665
>>>
I'm using Groovy for testing and Scala for the actual code. Obviously I often use Scala's collection types, but when I generate test data in Groovy I often use the java.util.* types.
I started writing static conversion methods based on the scalaj-collection library. But that's just not 'groovy'.
What's the best approach to convert one to the other?
Might implicit conversions work somehow?
UPDATE:
For example, if I don't manually convert the types, I of course get:
groovy.lang.MissingMethodException:
No signature of method: static setup is applicable for argument types: (java.util.ArrayList)
Possible solutions: setup(scala.collection.immutable.List)
Did you try the "built-in" implicit conversions?
import scala.collection.JavaConversions._
Another approach is to change your Scala code to use Java collection types when declaring parameters and rely on implicit conversions in the method body to get the benefit of Scala collection operations.
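For example, here is a minimal sketch of that second approach, assuming a hypothetical setup method like the one named in your error message, which a Groovy test can call directly with a java.util.ArrayList:
import scala.collection.JavaConversions._

object TestSupport {
  // Accept the Java type at the boundary so Groovy can pass a java.util.ArrayList directly.
  def setup(users: java.util.List[String]): Unit = {
    // JavaConversions wraps the Java list implicitly; toList then copies it
    // into an immutable Scala List for use with the normal collection API.
    val scalaUsers: List[String] = users.toList
    scalaUsers.filter(_.nonEmpty).foreach(println)
  }
}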