The following query, when run from OrientDB Studio, gives an error.
Query:
g.V('userId', 'SDWEQS').repeat(out()).until(has('organizationId','org1'));
Error:
groovy.lang.MissingMethodException: No signature of method: com.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.out() is applicable for argument types: () values: []
Possible solutions: put(java.lang.String, java.lang.Object), get(java.lang.String), wait(), any(), dump(), wait(long)
I tried running the query from an OrientJS Node app but got the same error. I created the same graph on Gremlin Server, and the query works there.
My question is: does OrientDB support repeat()? Please suggest alternatives to make the above query work.
What I found is that OrientDB only supports Gremlin queries from TinkerPop 2.x, and repeat() wasn't part of 2.x. I will have to rework the query to use loop() instead.
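For reference, here is a rough TinkerPop 2.x equivalent using loop() (a sketch, untested against OrientDB; the 10-hop cap is an arbitrary guard against cycles):
g.V('userId', 'SDWEQS').as('x').out
 // keep walking out() while the target organization has not been reached
 .loop('x'){ it.loops < 10 && it.object.getProperty('organizationId') != 'org1' }
 // emit only the vertices that actually matched, mirroring until()
 .filter{ it.getProperty('organizationId') == 'org1' }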
See my answer here:
https://stackoverflow.com/a/54775290/1211805
Basically, just use the OrientDB REST API (port 2480).
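For illustration, a command of roughly this shape submits Gremlin over REST (the database name mydb and the default credentials are assumptions; see the linked answer for details):
curl -u admin:admin -X POST "http://localhost:2480/command/mydb/gremlin" --data "g.V('userId', 'SDWEQS')"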
I'm building an API in which I separate my requests by responsibility: services, repositories, and routes. I'm using TypeORM to connect to the database, and I'm trying to implement unit tests, so I have to decouple my code from TypeORM so that my tests don't depend on it. However, after decoupling, TypeORM is no longer able to connect to the database, which is MongoDB, and returns the following error: ConnectionNotFoundError: Connection "mongo" was not found.
I've been reading about this, and from what I understood, TypeORM doesn't fully support MongoDB, which would be why it failed. I would like to confirm whether that is the case, and what the best alternative would be.
MongoDB is supported by TypeORM (although I have not used it myself).
You can follow a short tutorial here.
What is not supported for Mongo is the TypeORM "Migration" feature; see, for example, typeorm issue 6695. Migration is an advanced topic and you probably do not need it, but if you do, a workaround is in the Stack Overflow answer here.
But this Migration issue has nothing to do with your error. Almost certainly, your getConnection() call or your ormconfig.json is incorrect. Start with a simple project based on the TypeORM Mongo tutorial, nothing else; verify you can connect to MongoDB, and work up from there.
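For illustration, since the error complains about a connection named "mongo", a minimal ormconfig.json along these lines (host, port, database, and the entities path are all placeholders) should let getConnection("mongo") resolve:
[
  {
    "name": "mongo",
    "type": "mongodb",
    "host": "localhost",
    "port": 27017,
    "database": "test",
    "useUnifiedTopology": true,
    "entities": ["src/entities/**/*.ts"]
  }
]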
I want to use DocumentDB but there is no connector for PySpark. Looks like DocumentDB also supports the MongoDB protocol as mentioned here, which means all existing MongoDB drivers should work. Since there is a PySpark connector for MongoDB, I wanted to try it out.
df = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
This throws an error.
com.mongodb.MongoCommandException: Command failed with error 115: ''$sample' is not supported' on server example.documents.azure.com:10250. The full response is { "_t" : "OKMongoResponse", "ok" : 0, "code" : 115, "errmsg" : "'$sample' is not supported", "$err" : "'$sample' is not supported" }
It looks like DocumentDB's MongoDB API doesn't support all MongoDB features, but I can't find any documentation about this. Or am I missing something else?
I want to use DocumentDB but there is no connector for PySpark.
A preview of a Spark to DocumentDB connector (including a pyDocumentDB package) was made available in early April 2017.
Looks like DocumentDB also supports the MongoDB protocol as mentioned here, which means all existing MongoDB drivers should work
DocumentDB supports the MongoDB wire protocol for communication and reports its version as MongoDB 3.2.0, but this does not mean that it is a drop-in replacement with full support for all MongoDB features (or that DocumentDB implements features with identical behaviour and limits). A notable absence at the moment is any support for MongoDB's aggregation pipeline, which includes the $sample operator that the PySpark connector is expecting to be available given a connection to a server claiming to be MongoDB 3.2.
You can find more examples of potential compatibility issues in the comments on the DocumentDB API for MongoDB documentation you referenced in your question.
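As an illustration only (untested against DocumentDB; the option and partitioner names assume the MongoDB Spark connector 2.x, and the URI is a placeholder): supplying an explicit schema skips the sampled schema inference that issues $sample, and a paginating partitioner avoids $sample-based partitioning, if your connector version offers it.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.getOrCreate()

# An explicit schema skips the connector's sampled schema inference.
schema = StructType([StructField("name", StringType(), True)])

df = (spark.read.format("com.mongodb.spark.sql.DefaultSource")
      .option("uri", "mongodb://user:pass@example.documents.azure.com:10250/db.coll?ssl=true")
      # A count/skip-based partitioner instead of the $sample-based default.
      .option("partitioner", "MongoPaginateByCountPartitioner")
      .schema(schema)
      .load())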
I use ReactiveCouchbase (a Scala port of the Java SDK: https://github.com/ReactiveCouchbase/ReactiveCouchbase-core).
For queries it uses the HTTP endpoint (http://mycouchbaseadress:8093/query?q=<N1QL command>), but the server's response is "Unrecognized parameter in request: q".
I found on Stack Overflow that I should start cbq-engine, so I tried to launch cbq-engine -couchbase http://mycouchbaseadress:8093/, but I get the error "flag provided but not defined: -couchbase".
My Couchbase version is 4.1 Community.
Do you know how I can send my N1QL query to the server through that endpoint?
It seems like there is a bug in ReactiveCouchbase, or at least its N1QL support was developed against an outdated beta version of the feature.
With Couchbase Server 4.0 GA and above, you don't need to run cbq-engine (this was the process used during N1QL's beta).
The problem is that in the code, the q= parameter is used where it should now be statement= (or a JSON body).
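For reference, a request of this shape works against the 4.x query service (the bucket name default is an assumption):
curl http://mycouchbaseadress:8093/query/service --data-urlencode 'statement=SELECT * FROM `default` LIMIT 1'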
There is an open pull request that happens to fix that issue, among other things, but it has been open for a long time.
I'm trying to retrieve project metrics using the REST API. To do this, I first query the projects using "/api/projects/index", then retrieve the metrics using "/api/metrics/search". Both work fine, and I get results like:
[id:35476, k:com.test:TestProject, nm:TestProject, qu:TRK, sc:PRJ]
[custom:false, description:Cyclomatic complexity, direction:-1, domain:Complexity, hidden:false, id:10019, key:complexity, name:Complexity, qualitative:false, type:INT]
Now I want to retrieve a project's metrics, so I use the following URL:
https://MYHOST/sonarqube/api/timemachine/index?resource=35476&metric=10019&fromDateTime=2010-12-25T23:59:59+0100&toDateTime=2018-12-25T23:59:59+0100
But the server returns only: [{"cols":[],"cells":[]}]
This surprises me, because when I open the project in the SonarQube web interface, I can see numbers. I tried some other metrics, but all ended with the same result. What am I doing wrong?
You didn't mention server version, so I'll assume the latest: 5.2.
I got the same result for a bare query (http://nemo.sonarqube.org/api/timemachine/index), and for a query which specified resource but not metrics (http://nemo.sonarqube.org/api/timemachine/index?resource=org.sonarsource.sonarqube%3Asonarqube).
So I'm guessing there's a problem with either your resource or metric id. Try using the keys (com.test%3ATestProject and complexity) instead.
And yes, the ids you got back from the other web services should work here, but what's meant by "id" can be a little... ah... variable from service to service.
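For example, with the keys above, the request would look something like this (hypothetical; note that the + in the timezone offsets must be URL-encoded as %2B, and if I remember the 5.x web API correctly the parameter is metrics, plural):
https://MYHOST/sonarqube/api/timemachine/index?resource=com.test%3ATestProject&metrics=complexity&fromDateTime=2010-12-25T23:59:59%2B0100&toDateTime=2018-12-25T23:59:59%2B0100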
I'm using Titan 0.3.2 in embedded mode with Cassandra and Elasticsearch. I am using the configuration documented in the Titan docs for my cassandra-es.properties (fed into titan.sh/titan.bat):
storage.backend=embeddedcassandra
storage.cassandra-config-dir=config/cassandra.yaml
storage.index.search.backend=elasticsearch
storage.index.search.directory=/tmp/searchindex
storage.index.search.client-only=false
storage.index.search.local-mode=true
But I'm trying to get the right configuration for bin/cassandra-es.local to connect to the Titan server via the Gremlin client shell (with g = TitanFactory.open("cassandra-es.local")). If I try to use the default version included with the download:
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
The graph won't know anything about the ES index ("Index is unknown or not configured: search").
If I configure it with:
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
storage.index.search.backend=elasticsearch
storage.index.search.client-only=false
storage.index.search.directory=/tmp/cassandra/elasticsearch
It will create an ES instance on another port that seems to exist separately from the one used by the server.
My question: (how) can I set up my Gremlin console to properly communicate with the Titan Embedded Server?
There was some recent discussion about this on the Google group. It looks like it's actually not possible to run two ES instances on one machine, so one of the easier ways around this is to set up ES separately on a VM.
I tried out this solution, and it works fine with these lines in both cassandra-es.local and titan-server-cassandra-es.properties:
storage.index.search.backend=elasticsearch
storage.index.search.hostname=<VM ES server IP>
storage.index.search.client-only=true
I can now access the same ES index from both the Gremlin shell and the Titan server.
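As a quick sanity check from the Gremlin shell (a sketch; it assumes the console is started from the Titan home directory so the relative path resolves):
// Open the graph via the shared properties file and list the indexed keys;
// keys backed by the external "search" (Elasticsearch) index should appear.
g = TitanFactory.open("bin/cassandra-es.local")
g.getIndexedKeys(Vertex.class)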