I have installed the Neo4j doc manager as per the documentation. When I try to sync my MongoDB data using the command below, it waits indefinitely:
Python35-32>mongo-connector -m localhost:27017 -t http://localhost:7474/db/data -d neo4j_doc_manager
Logging to mongo-connector.log.
The content of mongo-connector.log is as follows:
2016-02-26 19:10:11,809 [ERROR] mongo_connector.doc_managers.neo4j_doc_manager:70 - Bulk
The content of oplog.timestamp is as follows:
["Collection(Database(MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True, replicaset='myDevReplSet'), 'local'), 'oplog.rs')", 6255589333701492738]
EDIT:
If I start mongo-connector with the -v option, the mongo-connector.log file looks like this:
2016-02-29 15:17:18,964 [INFO] mongo_connector.connector:1040 - Beginning Mongo Connector
2016-02-29 15:17:19,005 [INFO] mongo_connector.oplog_manager:89 - OplogThread: Initializing oplog thread
2016-02-29 15:17:23,060 [INFO] mongo_connector.connector:295 - MongoConnector: Starting connection thread MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True, replicaset='myDevReplSet')
2016-02-29 15:17:23,061 [DEBUG] mongo_connector.oplog_manager:158 - OplogThread: Run thread started
2016-02-29 15:17:23,061 [DEBUG] mongo_connector.oplog_manager:160 - OplogThread: Getting cursor
2016-02-29 15:17:23,062 [DEBUG] mongo_connector.oplog_manager:670 - OplogThread: reading last checkpoint as Timestamp(1456492891, 2)
2016-02-29 15:17:23,062 [DEBUG] mongo_connector.oplog_manager:654 - OplogThread: oplog checkpoint updated to Timestamp(1456492891, 2)
2016-02-29 15:17:23,068 [DEBUG] mongo_connector.oplog_manager:178 - OplogThread: Got the cursor, count is 1
2016-02-29 15:17:23,069 [DEBUG] mongo_connector.oplog_manager:185 - OplogThread: about to process new oplog entries
2016-02-29 15:17:23,069 [DEBUG] mongo_connector.oplog_manager:188 - OplogThread: Cursor is still alive and thread is still running.
2016-02-29 15:17:23,069 [DEBUG] mongo_connector.oplog_manager:194 - OplogThread: Iterating through cursor, document number in this cursor is 0
2016-02-29 15:17:24,094 [DEBUG] mongo_connector.oplog_manager:188 - OplogThread: Cursor is still alive and thread is still running.
2016-02-29 15:17:25,095 [DEBUG] mongo_connector.oplog_manager:188 - OplogThread: Cursor is still alive and thread is still running.
2016-02-29 15:17:26,105 [DEBUG] mongo_connector.oplog_manager:188 - OplogThread: Cursor is still alive and thread is still running.
2016-02-29 15:17:27,107 [DEBUG] mongo_connector.oplog_manager:188 - OplogThread: Cursor is still alive and thread is still running.
Nothing was wrong with my installation.
Data added to MongoDB after the mongo-connector service started is automatically reflected in Neo4j; data that was already in MongoDB before the service started is not loaded into Neo4j.
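One thing that may help (a sketch based on mongo-connector's documented behaviour, not something verified on this setup): the connector only performs an initial collection dump when it has no saved oplog progress, so stopping it, deleting the oplog.timestamp file shown above, and starting it again should make it copy the pre-existing documents as well:
rm oplog.timestamp
mongo-connector -m localhost:27017 -t http://localhost:7474/db/data -d neo4j_doc_manager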
I ran the flutter build apk command and I am facing the exception below:
FAILURE: Build failed with an exception.
* What went wrong:
Could not dispatch a message to the daemon.
The message received from the daemon indicates that the daemon has disappeared.
Build request sent: Build{id=68f521a5-3aa3-4a37-be49-737d539a0090, currentDir=C:\Users\Lenovo\AndroidStudioProjects\subidha-pharmacy\android}
Attempting to read last messages from the daemon log...
Daemon pid: 10516
log file: C:\Users\Lenovo\.gradle\daemon\5.1.1\daemon-10516.out.log
----- Last 20 lines from daemon log file - daemon-10516.out.log -----
14:41:46.324 [DEBUG] [org.gradle.process.internal.health.memory.MemoryManager] Emitting OS memory status event {Total: 16627265536, Free: 4486569984}
14:41:46.324 [DEBUG] [org.gradle.launcher.daemon.server.health.LowMemoryDaemonExpirationStrategy] Received memory status update: {Total: 16627265536, Free: 4486569984}
14:41:46.324 [DEBUG] [org.gradle.process.internal.health.memory.MemoryManager] Emitting JVM memory status event {Maximum: 1431830528, Committed: 1229455360}
14:41:46.605 [DEBUG] [sun.rmi.transport.tcp] RMI TCP Connection(19)-127.0.0.1: (port 61852) connection closed
14:41:46.606 [DEBUG] [sun.rmi.transport.tcp] RMI TCP Connection(19)-127.0.0.1: close connection
14:41:47.590 [DEBUG] [org.gradle.launcher.daemon.server.Daemon] DaemonExpirationPeriodicCheck running
14:41:47.594 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Waiting to acquire shared lock on daemon addresses registry.
14:41:47.595 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Lock acquired on daemon addresses registry.
14:41:47.595 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Releasing lock on daemon addresses registry.
14:41:47.596 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Waiting to acquire shared lock on daemon addresses registry.
14:41:47.597 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Lock acquired on daemon addresses registry.
14:41:47.597 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Releasing lock on daemon addresses registry.
14:41:47.598 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Waiting to acquire shared lock on daemon addresses registry.
14:41:47.599 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Lock acquired on daemon addresses registry.
14:41:47.599 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Releasing lock on daemon addresses registry.
14:41:50.130 [DEBUG] [sun.rmi.transport.tcp] RMI TCP Connection(20)-127.0.0.1: (port 61852) connection closed
14:41:50.130 [DEBUG] [sun.rmi.transport.tcp] RMI TCP Connection(20)-127.0.0.1: close connection
14:41:51.313 [DEBUG] [org.gradle.process.internal.health.memory.MemoryManager] Emitting OS memory status event {Total: 16627265536, Free: 4483686400}
14:41:51.313 [DEBUG] [org.gradle.launcher.daemon.server.health.LowMemoryDaemonExpirationStrategy] Received memory status update: {Total: 16627265536, Free: 4483686400}
14:41:51.314 [DEBUG] [org.gradle.process.internal.health.memory.MemoryManager] Emitting JVM memory status event {Maximum: 1431830528, Committed: 1229455360}
----- End of the daemon log -----
FAILURE: Build failed with an exception.
What went wrong:
Could not dispatch a message to the daemon.
Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
Get more help at https://help.gradle.org
Running Gradle task 'assembleRelease'...
Running Gradle task 'assembleRelease'... Done 424.7s (!)
Gradle task assembleRelease failed with exit code 1
flutter clean
flutter build apk
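If flutter clean alone does not fix it, a common mitigation for a disappearing Gradle daemon is to give it more memory, since the log above shows the daemon's low-memory expiration strategy polling constantly. This is an assumption on my part rather than part of the original answer, and the values below are hypothetical; tune them to your machine, in android/gradle.properties:
org.gradle.jvmargs=-Xmx2048m -XX:MaxMetaspaceSize=512m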
I am trying to run Apache Atlas in a standalone fashion on Ubuntu - meaning without having to set up Solr and/or HBase.
What I did (according to the documentation: http://atlas.apache.org/0.8.1/InstallationSteps.html) was clone the Git repository and build the Maven project with embedded HBase and Solr:
mvn clean package -Pdist,embedded-hbase-solr
I unpacked the resulting tar.gz file and executed bin/atlas_start.py - without having changed any configuration. To my understanding of the documentation, that should actually start up HBase along with Atlas - right?
This is what I find in logs/application.log:
2017-11-30 17:14:24,093 INFO - [main:] ~ >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (Atlas:216)
2017-11-30 17:14:24,093 INFO - [main:] ~ Server starting with TLS ? false on port 21000 (Atlas:217)
2017-11-30 17:14:24,093 INFO - [main:] ~ <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< (Atlas:218)
2017-11-30 17:14:27,684 INFO - [main:] ~ No authentication method configured. Defaulting to simple authentication (LoginProcessor:102)
2017-11-30 17:14:28,527 INFO - [main:] ~ Logged in user daniel (auth:SIMPLE) (LoginProcessor:77)
2017-11-30 17:14:31,777 INFO - [main:] ~ Not running setup per configuration atlas.server.run.setup.on.start. (SetupSteps$SetupRequired:189)
2017-11-30 17:14:39,456 WARN - [main-SendThread(localhost:2181):] ~ Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (ClientCnxn$SendThread:110$
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-11-30 17:14:39,594 WARN - [main:] ~ Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = Connecti$
2017-11-30 17:14:40,593 WARN - [main-SendThread(localhost:2181):] ~ Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (ClientCnxn$SendThread:110$
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
...
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-11-30 17:14:56,185 WARN - [main:] ~ Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = Connecti$
2017-11-30 17:14:56,186 ERROR - [main:] ~ ZooKeeper exists failed after 4 attempts (RecoverableZooKeeper:277)
2017-11-30 17:14:56,186 WARN - [main:] ~ hconnection-0x1dba4e060x0, quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid) (ZKUtil:544)
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:541)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
To me it reads as if HBase (and ZooKeeper) are not started by the script.
Am I missing something?
Thanks for your hints!
OK, meanwhile I figured out the issue. The start script obviously does not execute the script conf/atlas-env.sh, which sets some environment variables. Among these are MANAGE_LOCAL_HBASE and MANAGE_LOCAL_SOLR. So if you set those two env vars to true (and set JAVA_HOME properly, which is needed for the embedded HBase), then Atlas automatically starts HBase and Solr - and we get a locally running instance of Atlas!
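In practice the workaround boils down to something like this (the variable names come from conf/atlas-env.sh; the JAVA_HOME path is only an example, adjust it to your JDK):
export MANAGE_LOCAL_HBASE=true
export MANAGE_LOCAL_SOLR=true
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
bin/atlas_start.py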
Maybe this helps someone who comes across the same issue in future!
Update March 2021
There are two ways of running Apache Atlas:
A) Building it from scratch:
git clone https://github.com/apache/atlas
mvn clean install -DskipTests
mvn clean package -Pdist -DskipTests
Running atlas_start.py:
python <atlas-directory>/bin/atlas_start.py
B) Using docker image:
docker-compose.yml
version: "3.3"
services:
  atlas:
    image: sburn/apache-atlas
    container_name: atlas
    ports:
      - "21000:21000"
    volumes:
      - "./bash_script:/app"
    command: bash -exc "/opt/apache-atlas-2.1.0/bin/atlas_start.py"
docker-compose up
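Once the container is up, the Atlas UI should answer on the mapped port. A quick smoke test (admin/admin are the well-known Atlas defaults - an assumption here, and worth changing in any real deployment):
curl -u admin:admin http://localhost:21000/api/atlas/admin/version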
I am a newbie to HAPI FHIR, and to FHIR in general. I'm trying to import SNOMED CT (US version) into HAPI FHIR. I'm using the CLI client to do this, as follows:
To run the server:
java -jar hapi-fhir-cli.jar run-server
To upload SNOMED CT:
java -jar hapi-fhir-cli.jar upload-terminology -d SnomedCT_RF2Release_INT_20160131.zip -t http://localhost:8080/baseDstu3 -u http://snomed.info/sct
The code system is uploaded successfully:
{
  "resourceType": "Bundle",
  "id": "948f4c4b-2e28-475b-a629-3d5122d5e103",
  "meta": {
    "lastUpdated": "2017-09-11T11:47:56.941+02:00"
  },
  "type": "searchset",
  "total": 1,
  "link": [
    {
      "relation": "self",
      "url": "http://localhost:8080/baseDstu3/CodeSystem?_pretty=true"
    }
  ],
  "entry": [
    {
      "fullUrl": "http://localhost:8080/baseDstu3/CodeSystem/1",
      "resource": {
        "resourceType": "CodeSystem",
        "id": "1",
        "meta": {
          "versionId": "1",
          "lastUpdated": "2017-09-11T10:25:43.282+02:00"
        },
        "url": "http://snomed.info/sct",
        "content": "not-present"
      },
      "search": {
        "mode": "match"
      }
    }
  ]
}
But I can't find the codes! This is my ValueSet search result:
{
  "resourceType": "Bundle",
  "id": "37fff235-1229-4491-a3ab-9bdba2333d57",
  "meta": {
    "lastUpdated": "2017-09-11T11:49:35.553+02:00"
  },
  "type": "searchset",
  "total": 0,
  "link": [
    {
      "relation": "self",
      "url": "http://localhost:8080/baseDstu3/ValueSet?_pretty=true"
    }
  ]
}
This is extracted from my logs:
/hapi-fhir-cli/hapi-fhir-cli-app/target$ java -jar hapi-fhir-cli.jar run-server
------------------------------------------------------------
🔥 HAPI FHIR 3.0.0-SNAPSHOT - Command Line Tool
------------------------------------------------------------
Max configured JVM memory (Xmx): 1.7GB
Detected Java version: 1.8.0_144
------------------------------------------------------------
10:20:49 INFO ca.uhn.fhir.context.FhirContext - Creating new FHIR context for FHIR version [DSTU3]
10:20:49 INFO ca.uhn.fhir.cli.RunServerCommand - Preparing HAPI FHIR JPA server on port 8080
10:20:51 INFO ca.uhn.fhir.cli.RunServerCommand - Starting HAPI FHIR JPA server in DSTU3 mode
10:21:22 INFO ca.uhn.fhir.context.FhirContext - Creating new FHIR context for FHIR version [DSTU3]
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - SQL Warning Code: 10000, SQLState: 01J01
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - Database 'directory:target/jpaserver_derby_files' not created, connection made to existing database instead.
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - SQL Warning Code: 10000, SQLState: 01J01
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - Database 'directory:target/jpaserver_derby_files' not created, connection made to existing database instead.
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - SQL Warning Code: 10000, SQLState: 01J01
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - Database 'directory:target/jpaserver_derby_files' not created, connection made to existing database instead.
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - SQL Warning Code: 10000, SQLState: 01J01
10:21:46 WARN o.h.e.jdbc.spi.SqlExceptionHelper - Database 'directory:target/jpaserver_derby_files' not created, connection made to existing database instead.
10:21:47 INFO ca.uhn.fhir.cli.RunServerCommand - Server started on port 8080
10:21:47 INFO ca.uhn.fhir.cli.RunServerCommand - Web Testing UI : http://localhost:8080/
10:21:47 INFO ca.uhn.fhir.cli.RunServerCommand - Server Base URL: http://localhost:8080/baseDstu3/
10:22:09 INFO ca.uhn.fhir.context.FhirContext - Creating new FHIR context for FHIR version [DSTU3]
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 95ms for query 93e6a047-b93f-4e6c-8ee9-03b51d08bd45
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 96ms for query 93e6a047-b93f-4e6c-8ee9-03b51d08bd45
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
10:22:10 INFO c.u.f.j.d.d.SearchParamRegistryDstu3 - Refreshed search parameter cache in 108ms
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 67ms for query 60cc4f7c-887c-4fe6-9ca5-32f24017f91a
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 67ms for query 60cc4f7c-887c-4fe6-9ca5-32f24017f91a
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 2ms for query a1ff7c92-b273-4ef4-8898-870c0377a161
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 3ms for query a1ff7c92-b273-4ef4-8898-870c0377a161
10:22:10 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
10:22:10 INFO c.uhn.fhir.rest.server.RestfulServer - Initializing HAPI FHIR restful server running in DSTU3 mode
10:22:10 INFO c.uhn.fhir.rest.server.RestfulServer - Added 117 resource provider(s). Total 117
10:22:10 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.jpa.rp.dstu3.AccountResourceProvider
....
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.jpa.rp.dstu3.TestScriptResourceProvider
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.jpa.rp.dstu3.ValueSetResourceProvider
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.jpa.rp.dstu3.VisionPrescriptionResourceProvider
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Added 2 plain provider(s). Total 2
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.jpa.provider.dstu3.JpaSystemProviderDstu3
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.jpa.provider.dstu3.TerminologyUploaderProviderDstu3
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class org.hl7.fhir.dstu3.hapi.rest.server.ServerProfileProvider
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.jpa.provider.dstu3.JpaConformanceProviderDstu3
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - Scanning type for RESTful methods: class ca.uhn.fhir.rest.server.PageProvider
10:22:11 INFO c.uhn.fhir.rest.server.RestfulServer - A FHIR has been lit on this server
10:22:12 INFO c.u.f.n.BaseThymeleafNarrativeGenerator - Initializing narrative generator
10:22:13 INFO ca.uhn.fhir.to.Controller - Request(GET //localhost:8080/)#70976daf
10:22:20 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 3ms for query fad4433e-825c-41ae-b22e-9bbb0a1624a0
10:22:20 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 3ms for query fad4433e-825c-41ae-b22e-9bbb0a1624a0
10:22:20 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
...
10:23:50 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 3ms for query f0b25957-c166-4b23-be6e-0d06274da565
10:23:50 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 3ms for query f0b25957-c166-4b23-be6e-0d06274da565
10:23:50 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
10:23:52 INFO c.u.f.jpa.term.TerminologyLoaderSvc - Beginning SNOMED CT processing
10:23:59 INFO c.u.f.jpa.term.TerminologyLoaderSvc - Processing file SnomedCT_RF2Release_INT_20160131/Full/Terminology/sct2_Concept_Full_INT_20160131.txt
10:23:59 INFO c.u.f.jpa.term.TerminologyLoaderSvc - * Processed 1 records in SnomedCT_RF2Release_INT_20160131/Full/Terminology/sct2_Concept_Full_INT_20160131.txt
10:23:59 INFO c.u.f.jpa.term.TerminologyLoaderSvc - * Processed 100000 records in SnomedCT_RF2Release_INT_20160131/Full/Terminology/sct2_Concept_Full_INT_20160131.txt
10:24:00 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 2ms for query 6393e886-2878-4c3a-b50a-a19139c01dd4
10:24:00 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 9ms for query 6393e886-2878-4c3a-b50a-a19139c01dd4
10:24:00 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
...
10:25:29 INFO c.u.f.jpa.term.TerminologyLoaderSvc - * Processed 4700000 records in SnomedCT_RF2Release_INT_20160131/Full/Terminology/sct2_Relationship_Full_INT_20160131.txt
10:25:30 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 2ms for query df9f1505-e381-4eda-a04b-934293c54721
10:25:30 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 2ms for query df9f1505-e381-4eda-a04b-934293c54721
10:25:30 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
10:25:40 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 2ms for query 1961cfd8-b2b0-4626-a397-6a7b925e4547
10:25:40 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 2ms for query 1961cfd8-b2b0-4626-a397-6a7b925e4547
10:25:40 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
10:25:41 INFO c.u.f.jpa.term.TerminologyLoaderSvc - Looking for root codes
10:25:41 INFO c.u.f.jpa.term.TerminologyLoaderSvc - Done loading SNOMED CT files - 3 root codes, 319446 total codes
10:25:41 INFO c.u.f.jpa.term.TerminologyLoaderSvc - * Scanning for circular refs - have scanned 0 / 3 codes (0.0%)
...
10:25:43 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 81ms for query 9aa7c541-8dfa-4176-8621-046c7ba886e4
10:25:43 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 82ms for query 9aa7c541-8dfa-4176-8621-046c7ba886e4
10:25:46 INFO ca.uhn.fhir.jpa.dao.BaseHapiFhirDao - Saving history entry CodeSystem/1/_history/1
10:25:46 INFO c.u.f.j.dao.BaseHapiFhirResourceDao - Successfully created resource "CodeSystem/1/_history/1" in 3.314ms
10:25:46 INFO c.u.f.j.term.HapiTerminologySvcDstu3 - CodeSystem resource has ID: CodeSystem/1
10:25:46 INFO c.u.f.j.term.BaseHapiTerminologySvc - Storing code system
10:25:46 INFO c.u.f.j.term.BaseHapiTerminologySvc - Deleting old code system versions
10:25:46 INFO c.u.f.j.term.BaseHapiTerminologySvc - Flushing...
10:25:46 INFO c.u.f.j.term.BaseHapiTerminologySvc - Done flushing
10:25:47 INFO c.u.f.j.term.BaseHapiTerminologySvc - Validating all codes in CodeSystem for storage (this can take some time for large sets)
10:25:47 INFO c.u.f.j.term.BaseHapiTerminologySvc - Have validated 1000 concepts
...
10:25:49 INFO c.u.f.j.term.BaseHapiTerminologySvc - Have validated 319000 concepts
10:25:49 INFO c.u.f.j.term.BaseHapiTerminologySvc - Saving version containing 319446 concepts
10:25:49 INFO c.u.f.j.term.BaseHapiTerminologySvc - Saving code system
10:25:50 INFO c.u.f.j.term.BaseHapiTerminologySvc - Setting codesystemversion on 319446 concepts...
10:25:50 INFO c.u.f.j.term.BaseHapiTerminologySvc - Saving 319446 concepts...
10:25:50 INFO c.u.f.j.term.BaseHapiTerminologySvc - Have processed 1/319446 concepts (0%)
10:25:56 INFO c.u.f.j.term.BaseHapiTerminologySvc - Have processed 10000/319446 concepts (3%)
...
10:25:56 INFO c.u.f.j.term.BaseHapiTerminologySvc - Have processed 310000/319446 concepts (97%)
10:25:56 INFO c.u.f.j.term.BaseHapiTerminologySvc - Done saving concepts, flushing to database
10:25:57 INFO c.u.f.j.term.BaseHapiTerminologySvc - Done deleting old code system versions
10:25:57 INFO c.u.f.j.term.BaseHapiTerminologySvc - Note that some concept saving was deferred - still have 317446 concepts and 472410 relationships
10:25:57 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 7003ms for query a94cc0f2-b99e-4cce-9ba1-16b0b3ce70bb
10:25:57 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 7003ms for query a94cc0f2-b99e-4cce-9ba1-16b0b3ce70bb
10:25:57 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
10:26:00 INFO c.u.f.j.term.BaseHapiTerminologySvc - Saving 2000 deferred concepts...
10:26:04 INFO c.u.f.j.term.BaseHapiTerminologySvc - Saved 2000 deferred concepts (315645 codes remain and 472410 relationships remain) in 3936ms (1ms / code)
...
10:26:25 INFO c.u.f.j.term.BaseHapiTerminologySvc - Saving 2000 deferred concepts...
10:26:26 INFO c.u.f.j.term.BaseHapiTerminologySvc - Saved 2000 deferred concepts (305504 codes remain and 472410 relationships remain) in 852ms (0ms / code)
10:26:27 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Initial query result returned in 1ms for query c71cfc5b-02a4-4f16-a47f-15a947611b71
10:26:27 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - Query found 0 matches in 2ms for query c71cfc5b-02a4-4f16-a47f-15a947611b71
10:26:27 INFO ca.uhn.fhir.jpa.dao.SearchBuilder - The include pids are empty
UPDATE
OK, I can see those concepts in the database tables TRM_CONCEPT and TRM_CONCEPT_LINK. But is there a way to query those concepts?
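One option that should work once loading has finished is the standard FHIR CodeSystem $lookup operation (note that the log above says concept saving was deferred, so codes only become queryable after the deferred save completes). A sketch - 22298006 is just an example SNOMED CT code:
GET http://localhost:8080/baseDstu3/CodeSystem/$lookup?system=http://snomed.info/sct&code=22298006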
I'm using Apache HttpClient and I've started seeing some INFO output on the Eclipse console:
0 [main] INFO org.apache.commons.httpclient.HttpMethodDirector - I/O exception (java.net.ConnectException) caught when processing request: Connection refused: connect
3 [main] INFO org.apache.commons.httpclient.HttpMethodDirector - Retrying request
3861 [pool-1-thread-25] INFO org.apache.commons.httpclient.HttpMethodDirector - I/O exception (java.net.ConnectException) caught when processing request: Connection refused: connect
3861 [pool-1-thread-25] INFO org.apache.commons.httpclient.HttpMethodDirector - Retrying request
3913 [pool-1-thread-16] INFO org.apache.commons.httpclient.HttpMethodBase - Response content length is not known
To my knowledge, nothing has changed. How can I get rid of it?
It's probably your logging library. HttpClient likely depends on commons-logging, which automatically picks up a logging implementation from your classpath (either java.util.logging or log4j), and that implementation writes to the console by default.
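If log4j is the implementation being picked up, raising the threshold for the HttpClient logger categories should silence these lines; a minimal sketch for log4j.properties (the wire category only matters if wire logging was enabled):
log4j.logger.org.apache.commons.httpclient=ERROR
log4j.logger.httpclient.wire=ERROR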
I am currently attempting to use mongo-connector to automatically feed db updates to Solr. It works fine with the following command -
mongo-connector -m localhost:27017 -t http://localhost:8983/solr -d mongo_connector/doc_managers/solr_doc_manager.py
However, it indexes every collection in my MongoDB instance. I have tried the -n option as follows -
mongo-connector -m localhost:27017 -t http://localhost:8983/solr -n feed_scraper_development.articles -d mongo_connector/doc_managers/solr_doc_manager.py
This fails with the following error -
2014-07-24 22:23:23,053 - INFO - Beginning Mongo Connector
2014-07-24 22:23:23,104 - INFO - Starting new HTTP connection (1): localhost
2014-07-24 22:23:23,110 - INFO - Finished 'http://localhost:8983/solr/admin/luke?show=schema&wt=json' (get) with body '' in 0.018 seconds.
2014-07-24 22:23:23,115 - INFO - OplogThread: Initializing oplog thread
2014-07-24 22:23:23,116 - INFO - MongoConnector: Starting connection thread MongoClient('localhost', 27017)
2014-07-24 22:23:23,126 - INFO - Finished 'http://localhost:8983/solr/update/?commit=true' (post) with body 'u'<commit ' in 0.006 seconds.
2014-07-24 22:23:23,129 - INFO - Finished 'http://localhost:8983/solr/select/?q=%2A%3A%2A&sort=_ts+desc&rows=1&wt=json' (get) with body '' in 0.003 seconds.
2014-07-24 22:23:23,337 - INFO - Finished 'http://localhost:8983/solr/select/?q=_ts%3A+%5B6038164010275176560+TO+6038164010275176560%5D&rows=100000000&wt=json' (get) with body '' in 0.207 seconds.
Exception in thread Thread-2:
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 808, in __bootstrap_inner
self.run()
File "build/bdist.macosx-10.9-intel/egg/mongo_connector/oplog_manager.py", line 141, in run
cursor = self.init_cursor()
File "build/bdist.macosx-10.9-intel/egg/mongo_connector/oplog_manager.py", line 582, in init_cursor
cursor = self.get_oplog_cursor(timestamp)
File "build/bdist.macosx-10.9-intel/egg/mongo_connector/oplog_manager.py", line 361, in get_oplog_cursor
timestamp = self.rollback()
File "build/bdist.macosx-10.9-intel/egg/mongo_connector/oplog_manager.py", line 664, in rollback
if doc['ns'] in rollback_set:
KeyError: 'ns'
Any help or clues would be greatly appreciated!
Extra information: Solr 4.9.0 | MongoDB 2.6.3 | mongo-connector 1.2.1
It works as advertised after deleting all the indexes in the data folder, restarting Solr, and re-running the command with the -n option.
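Roughly, the sequence looks like this (the paths are assumptions - adjust the Solr home and core name to your install; removing oplog.timestamp is optional but forces mongo-connector to start from a clean checkpoint):
rm -rf <solr-home>/<core>/data/index
# restart Solr, then:
rm oplog.timestamp
mongo-connector -m localhost:27017 -t http://localhost:8983/solr -n feed_scraper_development.articles -d mongo_connector/doc_managers/solr_doc_manager.py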