Importing SNOMED CT into HAPI FHIR

I am a newbie to HAPI FHIR, and to FHIR in general. I'm trying to import the SNOMED CT US edition into HAPI FHIR, using the command-line client like this:
To run the server:
java -jar hapi-fhir-cli.jar run-server
To upload SNOMED CT:
java -jar hapi-fhir-cli.jar upload-terminology -d SnomedCT_RF2Release_INT_20160131.zip -t http://localhost:8080/baseDstu3 -u http://snomed.info/sct
The code system is uploaded successfully:
{
  "resourceType": "Bundle",
  "id": "948f4c4b-2e28-475b-a629-3d5122d5e103",
  "meta": {
    "lastUpdated": "2017-09-11T11:47:56.941+02:00"
  },
  "type": "searchset",
  "total": 1,
  "link": [
    {
      "relation": "self",
      "url": "http://localhost:8080/baseDstu3/CodeSystem?_pretty=true"
    }
  ],
  "entry": [
    {
      "fullUrl": "http://localhost:8080/baseDstu3/CodeSystem/1",
      "resource": {
        "resourceType": "CodeSystem",
        "id": "1",
        "meta": {
          "versionId": "1",
          "lastUpdated": "2017-09-11T10:25:43.282+02:00"
        },
        "url": "http://snomed.info/sct",
        "content": "not-present"
      },
      "search": {
        "mode": "match"
      }
    }
  ]
}
But I can't find the codes! This is my ValueSet search result:
{
  "resourceType": "Bundle",
  "id": "37fff235-1229-4491-a3ab-9bdba2333d57",
  "meta": {
    "lastUpdated": "2017-09-11T11:49:35.553+02:00"
  },
  "type": "searchset",
  "total": 0,
  "link": [
    {
      "relation": "self",
      "url": "http://localhost:8080/baseDstu3/ValueSet?_pretty=true"
    }
  ]
}
This is extracted from my logs:
/hapi-fhir-cli/hapi-fhir-cli-app/target$ java -jar hapi-fhir-cli.jar run-server
------------------------------------------------------------
HAPI FHIR 3.0.0-SNAPSHOT - Command Line Tool
------------------------------------------------------------
Max configured JVM memory (Xmx): 1.7GB
Detected Java version: 1.8.0_144
------------------------------------------------------------
10:20:49 INFO ca.uhn.fhir.context.FhirContext - Creating new FHIR context for FHIR version [DSTU3]
10:20:49 INFO ca.uhn.fhir.cli.RunServerCommand - Preparing HAPI FHIR JPA server on port 8080
10:20:51 INFO ca.uhn.fhir.cli.RunServerCommand - Starting HAPI FHIR JPA server in DSTU3 mode
10:21:22 INFO ca.uhn.fhir.context.FhirContext - Creating new FHIR context for FHIR version [DSTU3]
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - SQL Warning Code: 10000, SQLState: 01J01
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - Database 'directory:target/jpaserver_derby_files' not created, connection made to existing database instead.
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - SQL Warning Code: 10000, SQLState: 01J01
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - Database 'directory:target/jpaserver_derby_files' not created, connection made to existing database instead.
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - SQL Warning Code: 10000, SQLState: 01J01
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - Database 'directory:target/jpaserver_derby_files' not created, connection made to existing database instead.
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - SQL Warning Code: 10000, SQLState: 01J01
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - Database 'directory:target/jpaserver_derby_files' not created, connection made to existing database instead.
10:21:47 INFO ca.uhn.fhir.cli.RunServerCommand - Server started on port 8080
10:21:47 INFO ca.uhn.fhir.cli.RunServerCommand - Web Testing UI : http://localhost:8080/
10:21:47 INFO ca.uhn.fhir.cli.RunServerCommand - Server Base URL: http://localhost:8080/baseDstu3/
10:22:09 INFO ca.uhn.fhir.context.FhirContext - Creating new FHIR context for FHIR version [DSTU3]
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 95ms for query 93e6a047-b93f-4e6c-8ee9-03b51d08bd45
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 96ms for query 93e6a047-b93f-4e6c-8ee9-03b51d08bd45
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
10:22:10 INFO c.u.f.j.d.d.SearchParamRegistryDstu3 - Refreshed search parameter cache in 108ms
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 67ms for query 60cc4f7c-887c-4fe6-9ca5-32f24017f91a
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 67ms for query 60cc4f7c-887c-4fe6-9ca5-32f24017f91a
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 2ms for query a1ff7c92-b273-4ef4-8898-870c0377a161
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 3ms for query a1ff7c92-b273-4ef4-8898-870c0377a161
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
10:22:10 INFO c.uhn.fhir.rest.server.RestfulServer - Initializing HAPI FHIR restful server running in DSTU3 mode
10:22:10 INFO c.uhn.fhir.rest.server.RestfulServer - Added 117 resource provider(s). Total 117
10:22:10 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.jpa.rp.dstu3.AccountResourceProvider
....
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.jpa.rp.dstu3.TestScriptResourceProvider
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.jpa.rp.dstu3.ValueSetResourceProvider
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.jpa.rp.dstu3.VisionPrescriptionResourceProvider
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Added 2 plain provider(s). Total 2
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.jpa.provider.dstu3.JpaSystemProviderDstu3
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.jpa.provider.dstu3.TerminologyUploaderProviderDstu3
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class org.hl7.fhir.dstu3.hapi.rest.server.ServerProfileProvider
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.jpa.provider.dstu3.JpaConformanceProviderDstu3
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.rest.server.PageProvider
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - A FHIR has been lit on this server
10:22:12 INFO c.u.f.n.BaseThymeleafNarrativeGenerator - Initializing narrative generator
10:22:13 INFO ca.uhn.fhir.to.Controller - Request(GET //localhost:8080/)#70976daf
10:22:20 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 3ms for query fad4433e-825c-41ae-b22e-9bbb0a1624a0
10:22:20 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 3ms for query fad4433e-825c-41ae-b22e-9bbb0a1624a0
10:22:20 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
...
10:23:50 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 3ms for query f0b25957-c166-4b23-be6e-0d06274da565
10:23:50 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 3ms for query f0b25957-c166-4b23-be6e-0d06274da565
10:23:50 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
10:23:52 INFO c.u.f.jpa.term.TerminologyLoaderSvc - Beginning SNOMED CT processing
10:23:59 INFO c.u.f.jpa.term.TerminologyLoaderSvc - Processing file SnomedCT_RF2Release_INT_20160131/Full/Terminology/sct2_Concept_Full_INT_20160131.txt
10:23:59 INFO c.u.f.jpa.term.TerminologyLoaderSvc - * Processed 1 records in SnomedCT_RF2Release_INT_20160131/Full/Terminology/sct2_Concept_Full_INT_20160131.txt
10:23:59 INFO c.u.f.jpa.term.TerminologyLoaderSvc - * Processed 100000 records in SnomedCT_RF2Release_INT_20160131/Full/Terminology/sct2_Concept_Full_INT_20160131.txt
10:24:00 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 2ms for query 6393e886-2878-4c3a-b50a-a19139c01dd4
10:24:00 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 9ms for query 6393e886-2878-4c3a-b50a-a19139c01dd4
10:24:00 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
...
10:25:29 INFO c.u.f.jpa.term.TerminologyLoaderSvc - * Processed 4700000 records in SnomedCT_RF2Release_INT_20160131/Full/Terminology/sct2_Relationship_Full_INT_20160131.txt
10:25:30 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 2ms for query df9f1505-e381-4eda-a04b-934293c54721
10:25:30 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 2ms for query df9f1505-e381-4eda-a04b-934293c54721
10:25:30 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
10:25:40 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 2ms for query 1961cfd8-b2b0-4626-a397-6a7b925e4547
10:25:40 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 2ms for query 1961cfd8-b2b0-4626-a397-6a7b925e4547
10:25:40 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
10:25:41 INFO c.u.f.jpa.term.TerminologyLoaderSvc - Looking for root codes
10:25:41 INFO c.u.f.jpa.term.TerminologyLoaderSvc - Done loading SNOMED CT files - 3 root codes, 319446 total codes
10:25:41 INFO c.u.f.jpa.term.TerminologyLoaderSvc - * Scanning for circular refs - have scanned 0 / 3 codes (0.0%)
...
10:25:43 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 81ms for query 9aa7c541-8dfa-4176-8621-046c7ba886e4
10:25:43 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 82ms for query 9aa7c541-8dfa-4176-8621-046c7ba886e4
10:25:46 INFO ca.uhn.fhir.jpa.dao.BaseHapiFhirDao - Saving history entry CodeSystem/1/_history/1
10:25:46 INFO c.u.f.j.dao.BaseHapiFhirResourceDao - Successfully created resource "CodeSystem/1/_history/1" in 3.314ms
10:25:46 INFO c.u.f.j.term.HapiTerminologySvcDstu3 - CodeSystem resource has ID: CodeSystem/1
10:25:46 INFO c.u.f.j.term.BaseHapiTerminologySvc - Storing code system
10:25:46 INFO c.u.f.j.term.BaseHapiTerminologySvc - Deleting old code system versions
10:25:46 INFO c.u.f.j.term.BaseHapiTerminologySvc - Flushing...
10:25:46 INFO c.u.f.j.term.BaseHapiTerminologySvc - Done flushing
10:25:47 INFO c.u.f.j.term.BaseHapiTerminologySvc - Validating all codes in CodeSystem for storage (this can take some time for large sets)
10:25:47 INFO c.u.f.j.term.BaseHapiTerminologySvc - Have validated 1000 concepts
...
10:25:49 INFO c.u.f.j.term.BaseHapiTerminologySvc - Have validated 319000 concepts
10:25:49 INFO c.u.f.j.term.BaseHapiTerminologySvc - Saving version containing 319446 concepts
10:25:49 INFO c.u.f.j.term.BaseHapiTerminologySvc - Saving code system
10:25:50 INFO c.u.f.j.term.BaseHapiTerminologySvc - Setting codesystemversion on 319446 concepts...
10:25:50 INFO c.u.f.j.term.BaseHapiTerminologySvc - Saving 319446 concepts...
10:25:50 INFO c.u.f.j.term.BaseHapiTerminologySvc - Have processed 1/319446 concepts (0%)
10:25:56 INFO c.u.f.j.term.BaseHapiTerminologySvc - Have processed 10000/319446 concepts (3%)
...
10:25:56 INFO c.u.f.j.term.BaseHapiTerminologySvc - Have processed 310000/319446 concepts (97%)
10:25:56 INFO c.u.f.j.term.BaseHapiTerminologySvc - Done saving concepts, flushing to database
10:25:57 INFO c.u.f.j.term.BaseHapiTerminologySvc - Done deleting old code system versions
10:25:57 INFO c.u.f.j.term.BaseHapiTerminologySvc - Note that some concept saving was deferred - still have 317446 concepts and 472410 relationships
10:25:57 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 7003ms for query a94cc0f2-b99e-4cce-9ba1-16b0b3ce70bb
10:25:57 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 7003ms for query a94cc0f2-b99e-4cce-9ba1-16b0b3ce70bb
10:25:57 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
10:26:00 INFO c.u.f.j.term.BaseHapiTerminologySvc - Saving 2000 deferred concepts...
10:26:04 INFO c.u.f.j.term.BaseHapiTerminologySvc - Saved 2000 deferred concepts (315645 codes remain and 472410 relationships remain) in 3936ms (1ms / code)
...
10:26:25 INFO c.u.f.j.term.BaseHapiTerminologySvc - Saving 2000 deferred concepts...
10:26:26 INFO c.u.f.j.term.BaseHapiTerminologySvc - Saved 2000 deferred concepts (305504 codes remain and 472410 relationships remain) in 852ms (0ms / code)
10:26:27 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 1ms for query c71cfc5b-02a4-4f16-a47f-15a947611b71
10:26:27 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 2ms for query c71cfc5b-02a4-4f16-a47f-15a947611b71
10:26:27 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
UPDATE
OK, I can see the concepts in the database tables TRM_CONCEPT and TRM_CONCEPT_LINK. But is there a way to query those concepts?
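For anyone hitting the same question: the imported concepts are not searchable as individual resources; you reach them through the FHIR terminology operations on the CodeSystem. A minimal sketch with curl (the code 22298006 is just an arbitrary SNOMED CT concept used for illustration; exact operation support depends on your HAPI FHIR version):

```shell
# Look up the display text and properties of a single SNOMED CT code
curl 'http://localhost:8080/baseDstu3/CodeSystem/$lookup?system=http://snomed.info/sct&code=22298006&_pretty=true'

# Check whether a code exists in the uploaded code system
curl 'http://localhost:8080/baseDstu3/CodeSystem/$validate-code?url=http://snomed.info/sct&code=22298006&_pretty=true'
```

The ValueSet search stays empty because upload-terminology only creates the CodeSystem resource; ValueSets that draw on SNOMED CT have to be created separately and then expanded with ValueSet/$expand.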

Related

GraphML import into Titan

I'm new to the Titan world. I would like to import data stored in a GraphML file into a database.
1. I downloaded titan-1.0.0-hadoop1
2. I ran ./titan.sh
3. I ran ./gremlin.sh
4. In the Gremlin console I wrote:
   :remote connect tinkerpop.server ../conf/remote.yaml
5. Next, I wrote:
   graph.io(IoCore.graphml()).readGraph("/tmp/file.graphml")
6. I got the message:
   No such property: graph for class: groovysh_evaluate
Could you help me?
IMO the most interesting entries from gremlin-server.log are:
84 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Configuring Gremlin Server from conf/gremlin-server/gremlin-server.yaml
158 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics ConsoleReporter configured with report interval=180000ms
160 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics CsvReporter configured with report interval=180000ms to fileName=/tmp/gremlin-server-metrics.csv
196 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics JmxReporter configured with domain= and agentId=
197 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics Slf4jReporter configured with interval=180000ms and loggerName=org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics
1111 [main] WARN org.apache.tinkerpop.gremlin.server.GremlinServer - Graph [graph] configured at [conf/gremlin-server/titan-berkeleyje-server.properties] could not be instantiated and will not be available in Gremlin Server. GraphFactory message: GraphFactory could not instantiate this Graph implementation [class com.thinkaurelius.titan.core.TitanFactory]
java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [class com.thinkaurelius.titan.core.TitanFactory]
...
1113 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
1499 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded nashorn ScriptEngine
2044 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded gremlin-groovy ScriptEngine
2488 [main] WARN org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor - Could not initialize gremlin-groovy ScriptEngine with scripts/empty-sample.groovy as script could not be evaluated - javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: graph for class: Script1
2488 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized GremlinExecutor and configured ScriptEngines.
2581 [main] WARN org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Could not instantiate configured serializer class - org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0 - it will not be available. There is no graph named [graph] configured to be used in the useMapperFromGraph setting
2582 [main] INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Configured application/vnd.gremlin-v1.0+gryo-stringd with org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0
2719 [main] WARN org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Could not instantiate configured serializer class - org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0 - it will not be available. There is no graph named [graph] configured to be used in the useMapperFromGraph setting
2720 [main] WARN org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Could not instantiate configured serializer class - org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0 - it will not be available. There is no graph named [graph] configured to be used in the useMapperFromGraph setting
...
The graph variable isn't declared anywhere in your script; it only exists on the server side. This is briefly covered in the Titan Server documentation, but it is easily overlooked.
In step 5, you need to submit your script command to the remote server. In the Gremlin Console, you do this by starting your command with :submit, or :> for shorthand. :> is the "submit" command, which sends the Gremlin on that line to the currently active remote.
:> graph.io(IoCore.graphml()).readGraph("/tmp/file.graphml")
If you don't submit the script to the remote server, the Gremlin Console will attempt to process the script within the console's JVM. graph is not defined locally, and that is why you saw the error in step 6.
Update: Based on your gremlin-server.log it looks like the issue is that the user that starts Titan with ./bin/titan.sh start doesn't have the appropriate file permissions to create the directory (db/berkeley) used by the default graph configuration (titan-berkeleyje-server.properties). Try updating the file permissions on the $TITAN_HOME directory.
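If file permissions are indeed the culprit, a fix along these lines may help (a sketch only; /opt/titan-1.0.0-hadoop1 is a hypothetical install location, and db/berkeley is the storage directory from the default titan-berkeleyje-server.properties):

```shell
# Hypothetical install location; substitute your actual Titan directory.
TITAN_HOME=/opt/titan-1.0.0-hadoop1

# Make the user who runs titan.sh the owner, so the default
# BerkeleyJE backend can create and write db/berkeley.
sudo chown -R "$USER" "$TITAN_HOME"
chmod -R u+rwX "$TITAN_HOME"

# Restart the server and retry the remote connection from the Gremlin Console.
"$TITAN_HOME"/bin/titan.sh stop
"$TITAN_HOME"/bin/titan.sh start
```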

OrientDB & .Net driver: Unable to read data from the transport connection

I'm getting an error while reading the network stream from a successful socket connection. Please see the debug log from OrientDB:
2016-04-08 18:08:51:590 WARNI Not enough physical memory available for DISKCACHE: 1,977MB (heap=494MB). Set lower Maximum Heap (-Xmx setting on JVM) and restart OrientDB. Now
running with DISKCACHE=256MB [orientechnologies]
2016-04-08 18:08:51:606 INFO OrientDB config DISKCACHE=-566MB (heap=494MB os=1,977MB disk=16,656MB) [orientechnologies]
2016-04-08 18:08:51:809 INFO Loading configuration from: C:/inetpub/wwwroot/orientdb-2.1.5/config/orientdb-server-config.xml... [OServerConfigurationLoaderXml]
2016-04-08 18:08:52:292 INFO OrientDB Server v2.1.5 (build 2.1.x#r${buildNumber}; 2015-10-29 16:54:25+0000) is starting up... [OServer]
2016-04-08 18:08:52:370 INFO Databases directory: C:\inetpub\wwwroot\orientdb-2.1.5\databases [OServer]
2016-04-08 18:08:52:495 INFO Listening binary connections on 127.0.0.1:2424 (protocol v.32, socket=default) [OServerNetworkListener]
2016-04-08 18:08:52:511 INFO Listening http connections on 127.0.0.1:2480 (protocol v.10, socket=default) [OServerNetworkListener]
2016-04-08 18:08:52:573 INFO Installing dynamic plugin 'studio-2.1.zip'... [OServerPluginManager]
2016-04-08 18:08:52:838 INFO Installing GREMLIN language v.2.6.0 - graph.pool.max=50 [OGraphServerHandler]
2016-04-08 18:08:52:838 INFO [OVariableParser.resolveVariables] Error on resolving property: distributed [orientechnologies]
2016-04-08 18:08:52:854 INFO Installing Script interpreter. WARN: authenticated clients can execute any kind of code into the server by using the following allowed languages:
[sql] [OServerSideScriptInterpreter]
2016-04-08 18:08:52:854 INFO OrientDB Server v2.1.5 (build 2.1.x#r${buildNumber}; 2015-10-29 16:54:25+0000) is active. [OServer]
2016-04-08 18:08:57:986 INFO /127.0.0.1:49243 - Connected [OChannelBinaryServer]
2016-04-08 18:08:58:002 INFO /127.0.0.1:49243 - Writing short (2 bytes): 32 [OChannelBinaryServer]
2016-04-08 18:08:58:002 INFO /127.0.0.1:49243 - Flush [OChannelBinaryServer]
2016-04-08 18:08:58:002 INFO /127.0.0.1:49243 - Reading byte (1 byte)... [OChannelBinaryServer]
I'm using the OrientDB .NET binary (C# driver) on Windows Vista. This was working fine until recently; I'm not sure what broke it...
Resetting TCP/IP using NetShell utility did not help.
Any help is highly appreciated.
The problem was the AVG anti-virus program blocking the socket. Adding an exception for localhost in the program fixed the problem.

Mongo Connector and Neo4j doc manager show no graph

I have installed the Neo4j doc manager as per the documentation. When I try to sync my MongoDB data using the command below, it waits indefinitely:
Python35-32>mongo-connector -m localhost:27017 -t http://localhost:7474/db/data -d neo4j_doc_manager
Logging to mongo-connector.log.
The content of mongo-connector.log is as follows:
2016-02-26 19:10:11,809 [ERROR] mongo_connector.doc_managers.neo4j_doc_manager:70 - Bulk
The content of oplog.timestamp is as follows:
["Collection(Database(MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True, replicaset='myDevReplSet'), 'local'), 'oplog.rs')", 6255589333701492738]
EDIT:
If I start mongo-connector with the -v option, the mongo-connector.log file looks like this:
2016-02-29 15:17:18,964 [INFO] mongo_connector.connector:1040 - Beginning Mongo Connector
2016-02-29 15:17:19,005 [INFO] mongo_connector.oplog_manager:89 - OplogThread: Initializing oplog thread
2016-02-29 15:17:23,060 [INFO] mongo_connector.connector:295 - MongoConnector: Starting connection thread MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True, replicaset='myDevReplSet')
2016-02-29 15:17:23,061 [DEBUG] mongo_connector.oplog_manager:158 - OplogThread: Run thread started
2016-02-29 15:17:23,061 [DEBUG] mongo_connector.oplog_manager:160 - OplogThread: Getting cursor
2016-02-29 15:17:23,062 [DEBUG] mongo_connector.oplog_manager:670 - OplogThread: reading last checkpoint as Timestamp(1456492891, 2)
2016-02-29 15:17:23,062 [DEBUG] mongo_connector.oplog_manager:654 - OplogThread: oplog checkpoint updated to Timestamp(1456492891, 2)
2016-02-29 15:17:23,068 [DEBUG] mongo_connector.oplog_manager:178 - OplogThread: Got the cursor, count is 1
2016-02-29 15:17:23,069 [DEBUG] mongo_connector.oplog_manager:185 - OplogThread: about to process new oplog entries
2016-02-29 15:17:23,069 [DEBUG] mongo_connector.oplog_manager:188 - OplogThread: Cursor is still alive and thread is still running.
2016-02-29 15:17:23,069 [DEBUG] mongo_connector.oplog_manager:194 - OplogThread: Iterating through cursor, document number in this cursor is 0
2016-02-29 15:17:24,094 [DEBUG] mongo_connector.oplog_manager:188 - OplogThread: Cursor is still alive and thread is still running.
2016-02-29 15:17:25,095 [DEBUG] mongo_connector.oplog_manager:188 - OplogThread: Cursor is still alive and thread is still running.
2016-02-29 15:17:26,105 [DEBUG] mongo_connector.oplog_manager:188 - OplogThread: Cursor is still alive and thread is still running.
2016-02-29 15:17:27,107 [DEBUG] mongo_connector.oplog_manager:188 - OplogThread: Cursor is still alive and thread is still running.
Nothing was wrong with my installation.
Data added to MongoDB after the mongo-connector service starts is automatically displayed in Neo4j; data that was already in MongoDB before the service started is not loaded into Neo4j.
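If the pre-existing documents are needed too, one common approach (a sketch, assuming mongo-connector's default oplog.timestamp progress file in the working directory) is to force a fresh initial collection dump:

```shell
# Stop mongo-connector, then discard the stored oplog checkpoint.
# With no checkpoint, mongo-connector performs a full collection dump
# before tailing the oplog, so documents that existed beforehand
# get copied into Neo4j as well.
rm oplog.timestamp

mongo-connector -m localhost:27017 \
    -t http://localhost:7474/db/data \
    -d neo4j_doc_manager
```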

Error after generating model

I wanted to learn Xtext. For many years I used Xpand and Xtend and they worked fine, but Xtext seems to have replaced them both, and the Xtext approach looks fine to me.
To start, I read the following tutorials: http://www.eclipse.org/Xtext/documentation/101_five_minutes.html, including the "15 Minutes Tutorial" and "15 Minutes Tutorial - Extended", and others. So I created a simple "model":
grammar org.bs.test.Test with org.eclipse.xtext.common.Terminals
generate Test "http://www.bs.org/test/Test"
Test:
main=TMain;
TMain:
'main' name=ID
'done';
I generated it the following way: "GenerateTest.mwe2" > right click > 'Run As' → 'MWE2 Workflow'.
Then I made a copy of the project. This was already my second or third attempt to find out what I had done wrong.
Now I changed the following line in the model: "main=TMain;" to "main=TMain?;". Then I ran the 'MWE2 Workflow' again, which completed successfully, but afterwards the following happened:
All files under 'src-gen/org/bs/test/Test/' and the files in the subfolders 'impl' and 'util' were deleted. Since they got deleted, I copied back the saved project and tried "Test.xtext" > right click > 'Run As' → 'Generate Xtext Artifacts', which resulted in the same thing.
There are two questions for me:
1) What is the difference between "Generate Xtext Artifacts" and "MWE2 Workflow", and when do I need which? I could not figure that out from the tutorials, especially when to use them.
2) What did I do wrong, and what do I have to do to generate the elements from the changed model?
I could not find much on this; I hope someone can help me. I did not find anything on either question.
EDIT 1:
When I create a completely new test project, it works:
project name: org.test
name: org.test.MyTest
extensions: mytest
with the following Xtext grammar:
grammar org.test.MyTest with org.eclipse.xtext.common.Terminals
generate myTest "http://www.test.org/MyTest"
Test:
main=TMain;
TMain:
'main' name=ID
'done';
But when I do the same with the following input:
Project name: org.bs.craass
Name: org.bs.craass.CraAss
Extension: craass
xtext:
grammar org.bs.craass.CraAss with org.eclipse.xtext.common.Terminals
generate craAss "http://www.bs.org/craass/CraAss"
CraAss:
main=CAMain;
CAMain:
'main' name=ID
'done';
Later I will try the following: install a fresh Eclipse EMF and create a new workspace.
EDIT 2:
So I tested a new workspace, and there it looks like it is working. Perhaps it is something with the old workspace. As mentioned in a comment: in the original workspace, after I got a good "version", I wanted to put it on git (for learning reasons). Since then it has not worked anymore. Here is some output from the generation:
0 [main] INFO lipse.emf.mwe.utils.StandaloneSetup - Registering platform uri 'C:\workspaces\emf_01'
401 [main] WARN lipse.emf.mwe.utils.StandaloneSetup - Skipping conflicting project org.bs.craass at 'file:/C:/workspaces/emf_01/org.bs.craass/' and using 'file:/C:/workspaces/emf_01/error_01/org.bs.craass/' instead.
926 [main] WARN lipse.emf.mwe.utils.StandaloneSetup - Skipping conflicting project org.bs.craass at 'file:/C:/workspaces/emf_01/error_01/org.bs.craass/' and using 'file:/C:/workspaces/emf_01/org.bs.craass/' instead.
939 [main] WARN lipse.emf.mwe.utils.StandaloneSetup - Skipping conflicting project org.bs.craass.sdk at 'file:/C:/workspaces/emf_01/error_01/org.bs.craass.sdk/' and using 'file:/C:/workspaces/emf_01/org.bs.craass.sdk/' instead.
970 [main] WARN lipse.emf.mwe.utils.StandaloneSetup - Skipping conflicting project org.bs.craass.tests at 'file:/C:/workspaces/emf_01/error_01/org.bs.craass.tests/' and using 'file:/C:/workspaces/emf_01/org.bs.craass.tests/' instead.
1090 [main] WARN lipse.emf.mwe.utils.StandaloneSetup - Skipping conflicting project org.bs.craass.ui at 'file:/C:/workspaces/emf_01/error_01/org.bs.craass.ui/' and using 'file:/C:/workspaces/emf_01/org.bs.craass.ui/' instead.
1749 [main] WARN lipse.emf.mwe.utils.StandaloneSetup - Skipping conflicting project org.bs.craass at 'file:/C:/workspaces/emf_01/org.bs.craass/' and using 'file:/C:/workspaces/emf_01/save_01/org.bs.craass/' instead.
1762 [main] WARN lipse.emf.mwe.utils.StandaloneSetup - Skipping conflicting project org.bs.craass.sdk at 'file:/C:/workspaces/emf_01/org.bs.craass.sdk/' and using 'file:/C:/workspaces/emf_01/save_01/org.bs.craass.sdk/' instead.
1820 [main] WARN lipse.emf.mwe.utils.StandaloneSetup - Skipping conflicting project org.bs.craass.tests at 'file:/C:/workspaces/emf_01/org.bs.craass.tests/' and using 'file:/C:/workspaces/emf_01/save_01/org.bs.craass.tests/' instead.
2082 [main] WARN lipse.emf.mwe.utils.StandaloneSetup - Skipping conflicting project org.bs.craass.ui at 'file:/C:/workspaces/emf_01/org.bs.craass.ui/' and using 'file:/C:/workspaces/emf_01/save_01/org.bs.craass.ui/' instead.
2577 [main] INFO lipse.emf.mwe.utils.StandaloneSetup - Adding generated EPackage 'org.eclipse.xtext.xbase.XbasePackage'
4253 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.eclipse.org/Xtext/Xbase/XAnnotations' from 'platform:/resource/org.eclipse.xtext.xbase/model/Xbase.genmodel'
4265 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.eclipse.org/xtext/xbase/Xtype' from 'platform:/resource/org.eclipse.xtext.xbase/model/Xbase.genmodel'
4335 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.eclipse.org/xtext/xbase/Xbase' from 'platform:/resource/org.eclipse.xtext.xbase/model/Xbase.genmodel'
4335 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.eclipse.org/xtext/common/JavaVMTypes' from 'platform:/resource/org.eclipse.xtext.common.types/model/JavaVMTypes.genmodel'
6234 [main] INFO lipse.emf.mwe.utils.StandaloneSetup - Adding generated EPackage 'org.eclipse.xtext.common.types.TypesPackage'
6267 [main] INFO ipse.emf.mwe.utils.DirectoryCleaner - Cleaning C:\workspaces\emf_01\org.bs.craass\..\org.bs.craass\src-gen
6326 [main] INFO ipse.emf.mwe.utils.DirectoryCleaner - Cleaning C:\workspaces\emf_01\org.bs.craass\..\org.bs.craass\model\generated
6330 [main] INFO ipse.emf.mwe.utils.DirectoryCleaner - Cleaning C:\workspaces\emf_01\org.bs.craass\..\org.bs.craass.ui\src-gen
6378 [main] INFO ipse.emf.mwe.utils.DirectoryCleaner - Cleaning C:\workspaces\emf_01\org.bs.craass\..\org.bs.craass.tests\src-gen
9146 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.bs.org/craass/CraAss' from 'file:/C:/workspaces/emf_01/org.bs.craass/model/generated/CraAss.genmodel'
15709 [main] INFO text.generator.junit.Junit4Fragment - generating Junit4 Test support classes
15731 [main] INFO text.generator.junit.Junit4Fragment - generating Compare Framework infrastructure
15973 [main] INFO .emf.mwe2.runtime.workflow.Workflow - Done.
I compared this with a run in another workspace, and the WARN lines do not appear there. To be honest, I ignored them at first, because they were "only" warnings. A run that completes successfully:
0 [main] INFO lipse.emf.mwe.utils.StandaloneSetup - Registering platform uri 'C:\workspaces\emf'
541 [main] INFO lipse.emf.mwe.utils.StandaloneSetup - Adding generated EPackage 'org.eclipse.xtext.xbase.XbasePackage'
1020 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.eclipse.org/Xtext/Xbase/XAnnotations' from 'platform:/resource/org.eclipse.xtext.xbase/model/Xbase.genmodel'
1031 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.eclipse.org/xtext/xbase/Xtype' from 'platform:/resource/org.eclipse.xtext.xbase/model/Xbase.genmodel'
1064 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.eclipse.org/xtext/xbase/Xbase' from 'platform:/resource/org.eclipse.xtext.xbase/model/Xbase.genmodel'
1064 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.eclipse.org/xtext/common/JavaVMTypes' from 'platform:/resource/org.eclipse.xtext.common.types/model/JavaVMTypes.genmodel'
2307 [main] INFO lipse.emf.mwe.utils.StandaloneSetup - Adding generated EPackage 'org.eclipse.xtext.common.types.TypesPackage'
2355 [main] INFO ipse.emf.mwe.utils.DirectoryCleaner - Cleaning C:\workspaces\emf\org.bs.craass\..\org.bs.craass\src-gen
2382 [main] INFO ipse.emf.mwe.utils.DirectoryCleaner - Cleaning C:\workspaces\emf\org.bs.craass\..\org.bs.craass\model\generated
2390 [main] INFO ipse.emf.mwe.utils.DirectoryCleaner - Cleaning C:\workspaces\emf\org.bs.craass\..\org.bs.craass.ui\src-gen
2407 [main] INFO ipse.emf.mwe.utils.DirectoryCleaner - Cleaning C:\workspaces\emf\org.bs.craass\..\org.bs.craass.tests\src-gen
4446 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.bs.org/craass/CraAss' from 'platform:/resource/org.bs.craass/model/generated/CraAss.genmodel'
11647 [main] INFO text.generator.junit.Junit4Fragment - generating Junit4 Test support classes
11719 [main] INFO text.generator.junit.Junit4Fragment - generating Compare Framework infrastructure
11997 [main] INFO .emf.mwe2.runtime.workflow.Workflow - Done.
So far the stand of my troubleshooting.
EDIT 3:
I do not know why, but it now accepts the old Xtext file I created, and while running, the following error occurs (though it seems to have little impact). Complete log:
0 [main] INFO lipse.emf.mwe.utils.StandaloneSetup - Registering platform uri 'C:\workspaces\emf'
664 [main] INFO lipse.emf.mwe.utils.StandaloneSetup - Adding generated EPackage 'org.eclipse.xtext.xbase.XbasePackage'
1864 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.eclipse.org/Xtext/Xbase/XAnnotations' from 'platform:/resource/org.eclipse.xtext.xbase/model/Xbase.genmodel'
1882 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.eclipse.org/xtext/xbase/Xtype' from 'platform:/resource/org.eclipse.xtext.xbase/model/Xbase.genmodel'
1987 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.eclipse.org/xtext/xbase/Xbase' from 'platform:/resource/org.eclipse.xtext.xbase/model/Xbase.genmodel'
1987 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.eclipse.org/xtext/common/JavaVMTypes' from 'platform:/resource/org.eclipse.xtext.common.types/model/JavaVMTypes.genmodel'
3982 [main] INFO lipse.emf.mwe.utils.StandaloneSetup - Adding generated EPackage 'org.eclipse.xtext.common.types.TypesPackage'
4018 [main] INFO ipse.emf.mwe.utils.DirectoryCleaner - Cleaning C:\workspaces\emf\org.bs.craass\..\org.bs.craass\src-gen
4061 [main] INFO ipse.emf.mwe.utils.DirectoryCleaner - Cleaning C:\workspaces\emf\org.bs.craass\..\org.bs.craass\model\generated
4064 [main] INFO ipse.emf.mwe.utils.DirectoryCleaner - Cleaning C:\workspaces\emf\org.bs.craass\..\org.bs.craass.ui\src-gen
4087 [main] INFO ipse.emf.mwe.utils.DirectoryCleaner - Cleaning C:\workspaces\emf\org.bs.craass\..\org.bs.craass.tests\src-gen
7153 [main] INFO clipse.emf.mwe.utils.GenModelHelper - Registered GenModel 'http://www.bs.org/craass/CraAss' from 'platform:/resource/org.bs.craass/model/generated/CraAss.genmodel'
error(208): ../org.bs.craass/src-gen/org/bs/craass/parser/antlr/internal/InternalCraAss.g:1199:1: The following token definitions can never be matched because prior tokens match the same input: RULE_INT
error(208): ../org.bs.craass.ui/src-gen/org/bs/craass/ui/contentassist/antlr/internal/InternalCraAss.g:2688:1: The following token definitions can never be matched because prior tokens match the same input: RULE_INT
16642 [main] INFO text.generator.junit.Junit4Fragment - generating Junit4 Test support classes
16661 [main] INFO text.generator.junit.Junit4Fragment - generating Compare Framework infrastructure
16804 [main] INFO .emf.mwe2.runtime.workflow.Workflow - Done.
The trouble seems to be that I have the following:
grammar org.bs.craass.CraAss with org.eclipse.xtext.common.Terminals
but on the other hand:
terminal INTEGER : '-'?('0'..'9')+;
terminal VAR_TERMINAL : '_' ('a'..'z'|'A'..'Z'|'_'|'0'..'9')*;
terminal REGISTER_TERMINAL : ('ax' | 'bx' );
terminal FUNCTION_TERMINAL : (('a'..'z'|'_'|'0'..'9')*'.')?('a'..'z'|'A'..'Z'|'_'|'0'..'9')*;
And org.eclipse.xtext.common.Terminals contains:
terminal INT returns ecore::EInt: ('0'..'9')+;
But I have no idea what to do about it.
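If I read the ANTLR error(208) correctly, the custom INTEGER terminal matches every input that the inherited INT terminal would match, so RULE_INT can never be reached. One common workaround (a sketch only, not verified against this grammar) is to drop the separate INTEGER rule and instead override the INT terminal inherited from common.Terminals:

```xtext
grammar org.bs.craass.CraAss with org.eclipse.xtext.common.Terminals

// Override the inherited INT terminal rather than declaring a separate
// INTEGER terminal, so the two rules no longer compete for the same input.
// The optional '-' is taken from the INTEGER rule in the question:
terminal INT returns ecore::EInt: '-'? ('0'..'9')+;
```

The key point is that only one terminal rule should claim plain digit sequences; any rule in the grammar that referenced INTEGER would then use INT instead.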
The problem itself: while generating, everything was created in src-gen/ except the generated Java files under src-gen/org.bs.craass.craAss and its subfolders. Now they are created, too. The more I tried to investigate, the less reproducible the error was. Well, I will see when I push it to Git again; perhaps the error will come back.
Thanks so far.
Running the workflow directly or invoking "Generate Language Artifacts" does the very same thing. The workflow reads your Xtext file and generates all the infrastructure Xtext provides for your language, so you have to run it whenever you change your grammar or the workflow itself. If your language is misconfigured or your grammar is broken, the generation may fail. The workflow may also refer to project names that have to be adapted as well (I don't know how you did the copy and paste; to be safe, you should use the Xtext project wizard to create the project).

MongoDB Hadoop connector with Pig support: connection issue

ERROR 2118: Unable to connect to collection.Unable to connect to collection.
My Pig Code:
REGISTER /home/auto/ykale/jars/mongo/mongo-hadoop-pig_cdh3u3-1.1.0.jar
REGISTER /home/auto/ykale/jars/mongo/mongo-hadoop-core_cdh3u3-1.1.0.jar
REGISTER /home/auto/ykale/jars/mongo/mongo-hadoop-streaming_cdh3u3-1.1.0.jar
REGISTER /home/auto/ykale/jars/mongo/com.mongodb_2.6.5.1.jar
--name1 = load 'mongodb://hfdvmprmongodb1.vm.itg.corp.us.shldcorp.com:27017/member_pricing.testData' USING com.mongodb.hadoop.pig.MongoLoader;
name1 = load 'mongodb://ykale:newpassword4#hfdvmprmongodb1.vm.itg.corp.us.shldcorp.com:27017/member_pricing.testData' USING com.mongodb.hadoop.pig.MongoLoader;
STORE name1 into '/user/ykale/mongo_dump/file1';
When I use the other load command, which is commented out in the code above, I get the following output, which launches 0 map-reduce jobs.
2013-12-11 05:16:24,769 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2013-12-11 05:16:24,769 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 0 map reduce job(s) failed!
2013-12-11 05:16:24,770 [main] INFO org.apache.pig.tools.pigstats.PigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
0.20.2-cdh3u3 0.8.1-cdh3u3 ykale 2013-12-11 05:16:22 2013-12-11 05:16:24 UNKNOWN
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
Input(s):
Output(s):
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
null
2013-12-11 05:16:24,770 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
I am executing the Pig script with -Dmongo.input.split.create_input_splits=false.
Any help appreciated.
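One thing that stands out: the standard MongoDB connection URI separates the credentials from the host with '@', not '#'. Whether that is the actual cause here is an assumption, but a sketch of the load statement in the standard URI form (host, database, and collection names copied from the question) would look like:

```pig
-- Standard mongodb:// URI shape: user:password@host:port/db.collection
-- (note the '@' between the credentials and the host)
name1 = LOAD 'mongodb://ykale:newpassword4@hfdvmprmongodb1.vm.itg.corp.us.shldcorp.com:27017/member_pricing.testData'
        USING com.mongodb.hadoop.pig.MongoLoader;
STORE name1 INTO '/user/ykale/mongo_dump/file1';
```

If the password itself contains reserved characters such as '@' or '#', they would need to be percent-encoded in the URI.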