How do I control MongoDB Java driver logging using java.util.logging properties?

I am using the MongoDB Java driver 3.2.2 (compile group: 'org.mongodb', name: 'mongo-java-driver', version: '3.2.2') and can't seem to turn OFF the logging that is coming from the driver.
My program is as follows:
public static void main(String args[]) {
    Enumeration<String> names = LogManager.getLogManager().getLoggerNames();
    Logger l = Logger.getLogger("org.mongodb.driver");
    l.info("Hello INFO!");
    l.warning("Hello WARNING!");
    SoundDB db = new SoundDB();
    db.doMain(args);
    while (names.hasMoreElements())
        System.out.println("Name = " + names.nextElement());
    l.info("Hello INFO!");
    l.warning("Hello WARNING!");
}
and when started with -Djava.util.logging.config.file=logging.properties, produces
Oct 12, 2016 7:44:22 PM com.ibm.watson.iot.sound.tools.SoundDB main
WARNING: Hello WARNING!
Loading caa properties from file:/C:/Users/IBM_ADMIN/git/iot-sound/IoT-Sound/caa.properties
19:44:23.889 [main] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
19:44:23.971 [main] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]
19:44:24.030 [main] INFO org.mongodb.driver.cluster - No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=SINGLE, all=[ServerDescription{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
19:44:24.042 [cluster-ClusterId{value='57fecad73df6efadcc807d9e', description='null'}-localhost:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:1261}] to localhost:27017
19:44:24.042 [cluster-ClusterId{value='57fecad73df6efadcc807d9e', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster - Checking status of localhost:27017
19:44:24.044 [cluster-ClusterId{value='57fecad73df6efadcc807d9e', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 2, 4]}, minWireVersion=0, maxWireVersion=4, maxDocumentSize=16777216, roundTripTimeNanos=1672627}
19:44:24.046 [cluster-ClusterId{value='57fecad73df6efadcc807d9e', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=STANDALONE, servers=[{address=localhost:27017, type=STANDALONE, roundTripTime=1.7 ms, state=CONNECTED}]
...
Name = javax.management.monitor
Name = javax.management.mlet
Name = org.bson.ObjectId
Name = global
Name = org.mongodb.driver
Name = javax.management
Name = javax.management.mbeanserver
Name =
Oct 12, 2016 7:44:24 PM com.ibm.watson.iot.sound.tools.SoundDB main
WARNING: Hello WARNING!
logging.properties contains
.level=WARNING
handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.ConsoleHandler.level=ALL
The org.mongodb.driver logger has its level set correctly to WARNING since only my warning messages are being printed out and not the info messages. There is no change (as I would expect) if I add the following to the properties:
org.bson.ObjectId.level=WARNING
org.mongodb.driver.level=WARNING
So, does anyone have any idea what I'm doing wrong? Thanks.

From: http://mongodb.github.io/mongo-java-driver/3.2/driver/reference/management/logging/
"By default, logging is enabled via the popular SLF4J API. The use of SLF4J is optional; the driver will use SLF4J if the driver detects the presence of SLF4J in the classpath. Otherwise, the driver will fall back to JUL (java.util.logging)"
Make sure that you have no SLF4J dependency on your classpath (directly or through other libraries). If SLF4J is present, you need to configure the SLF4J backend instead of java.util.logging to set the log level.
SLF4J is just a logging API; the actual logging can be backed by any implementation (JUL, Log4j, Logback). See https://dzone.com/articles/how-configure-slf4j-different for additional info. Which backend is actually used depends on your classpath. If you use Maven, you can find it by inspecting the dependency hierarchy (e.g. with mvn dependency:tree).
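For example, if Logback turns out to be the SLF4J backend on your classpath, you can raise the org.mongodb.driver logger to WARN either in logback.xml or programmatically. A minimal sketch of the programmatic route (this assumes logback-classic is the binding; with a different backend the cast below would fail):

import org.slf4j.LoggerFactory;

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;

public class MongoDriverLogLevel {
    public static void main(String[] args) {
        // Works only when Logback backs SLF4J: the logger returned by SLF4J is then
        // a ch.qos.logback.classic.Logger whose level can be changed at runtime.
        Logger driverLogger = (Logger) LoggerFactory.getLogger("org.mongodb.driver");
        driverLogger.setLevel(Level.WARN);
    }
}

If you would rather keep your java.util.logging setup, removing the SLF4J jars from the classpath makes the driver fall back to JUL, and your org.mongodb.driver.level=WARNING line should then take effect.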

Related

Using Liquibase Mongodb extension with Quarkus

Trying to use the liquibase-mongodb extension with Quarkus, without any success.
Can anyone point me to a working example?
application.yaml contents:
quarkus:
  mongodb:
    connection-string: mongodb://localhost:27017
    write-concern:
      journal: false
    database: foo1
  liquibase:
    migrate-at-start: true
    change-log: db/changeLog.yaml
db/changeLog.yaml contents:
databaseChangeLog:
  - include:
      file: changesets/foo.json
build.gradle contains:
implementation "io.quarkus:quarkus-liquibase"
implementation "org.liquibase.ext:liquibase-mongodb:${liquibaseVersion}"
implementation "org.mongodb:mongodb-driver-sync:${mongodbVersion}"
output:
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Slf4jLoggerFactory]
__ ____ __ _____ ___ __ ____ ______
--/ __ \/ / / / _ | / _ \/ //_/ / / / __/
-/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2021-03-07 12:26:39,254 WARN [io.qua.dep.QuarkusAugmentor] (main) Using Java versions older than 11 to build Quarkus applications is deprecated and will be disallowed in a future release!
2021-03-07 12:26:39,573 WARN [io.qua.agr.dep.AgroalProcessor] (build-24) The Agroal dependency is present but no JDBC datasources have been defined.
2021-03-07 12:26:40,583 INFO [org.mon.dri.cluster] (Quarkus Main Thread) Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms'}
2021-03-07 12:26:40,595 INFO [org.mon.dri.cluster] (Quarkus Main Thread) Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms'}
2021-03-07 12:26:40,617 INFO [org.mon.dri.connection] (cluster-rtt-ClusterId{value='6044b870c8687d11c71dfb0b', description='null'}-localhost:27017) Opened connection [connectionId{localValue:1, serverValue:5}] to localhost:27017
2021-03-07 12:26:40,617 INFO [org.mon.dri.connection] (cluster-ClusterId{value='6044b870c8687d11c71dfb0b', description='null'}-localhost:27017) Opened connection [connectionId{localValue:2, serverValue:6}] to localhost:27017
2021-03-07 12:26:40,617 INFO [org.mon.dri.cluster] (cluster-ClusterId{value='6044b870c8687d11c71dfb0b', description='null'}-localhost:27017) Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=9, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=11510711}
2021-03-07 12:26:40,620 INFO [org.mon.dri.connection] (cluster-ClusterId{value='6044b870c8687d11c71dfb0c', description='null'}-localhost:27017) Opened connection [connectionId{localValue:3, serverValue:8}] to localhost:27017
2021-03-07 12:26:40,620 INFO [org.mon.dri.cluster] (cluster-ClusterId{value='6044b870c8687d11c71dfb0c', description='null'}-localhost:27017) Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=9, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=3728082}
2021-03-07 12:26:40,621 INFO [org.mon.dri.connection] (cluster-rtt-ClusterId{value='6044b870c8687d11c71dfb0c', description='null'}-localhost:27017) Opened connection [connectionId{localValue:4, serverValue:7}] to localhost:27017
2021-03-07 12:26:40,704 INFO [io.quarkus] (Quarkus Main Thread) foo-app 0.0.1-SNAPSHOT on JVM (powered by Quarkus 1.11.3.Final) started in 1.524s. Listening on: http://localhost:8080
2021-03-07 12:26:40,705 INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
2021-03-07 12:26:40,705 INFO [io.quarkus] (Quarkus Main Thread) Installed features: [agroal, cdi, config-yaml, liquibase, mongodb-client, mongodb-panache, mongodb-rest-data-panache, mutiny, narayana-jta, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-openapi, swagger-ui]
So Liquibase is known to Quarkus, but the MongoDB changesets are not executed.
The Quarkus Liquibase extension only targets JDBC datasources for now.
Probably worth opening an enhancement request in our tracker so that we track this need, if it hasn't already been done.
Due to the lack of official support, I ended up with a custom implementation that does all I needed:
import java.util.Optional;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;

import org.eclipse.microprofile.config.inject.ConfigProperty;

import io.quarkus.runtime.StartupEvent;
import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.ext.mongodb.database.MongoLiquibaseDatabase;
import liquibase.resource.ClassLoaderResourceAccessor;
import lombok.SneakyThrows;

@ApplicationScoped
public class MongoDBMigration {

    @ConfigProperty(name = "quarkus.mongodb.connection-string")
    String connectionString;

    @ConfigProperty(name = "quarkus.mongodb.credentials.username")
    Optional<String> username;

    @ConfigProperty(name = "quarkus.mongodb.credentials.password")
    Optional<String> password;

    @ConfigProperty(name = "quarkus.liquibase.migrate-at-start")
    boolean liquibaseEnabled;

    @SneakyThrows
    void onStart(@Observes StartupEvent ev) {
        if (liquibaseEnabled) {
            Database database = (MongoLiquibaseDatabase) DatabaseFactory.getInstance()
                    .openDatabase(connectionString, username.orElse(null), password.orElse(null), null, null);
            Liquibase liquiBase = new Liquibase("db/changeLog.json", new ClassLoaderResourceAccessor(), database);
            liquiBase.update("");
        }
    }
}
having application.yaml:
quarkus:
  mongodb:
    connection-string: mongodb://localhost:27017/foo?socketTimeoutMS=1000&connectTimeoutMS=1000&serverSelectionTimeoutMS=1000
    write-concern:
      journal: false
    database: foo
  liquibase:
    migrate-at-start: true
    change-log: db/changeLog.xml
and build.gradle:
dependencies {
    implementation "io.quarkus:quarkus-liquibase"
    implementation "org.liquibase.ext:liquibase-mongodb:${liquibaseVersion}"
    implementation "org.mongodb:mongodb-driver-sync:${mongodbVersion}"
}

com.hazelcast.client.AuthenticationException: Invalid credentials! Principal :null

I have configured my multi-cluster Hazelcast server on Kubernetes via the Kubernetes API discovery strategy (please see "Two separate hazelcast clusters in kubernetes"), and the members of each cluster are successfully discovering each other.
My client project is running on the same k8s cluster as my Hazelcast server.
I have added the following dependency to my client project pom:
<dependency>
<groupId>com.hazelcast</groupId>
<artifactId>hazelcast-kubernetes</artifactId>
<version>1.3.1</version>
</dependency>
I have configured my Hazelcast client as given in the official documentation:
clientConfig.getNetworkConfig().getKubernetesConfig()
        .setEnabled(true)
        .setProperty("namespace", "default")
        .setProperty("service-name", "xyz");
(I have a namespace called "default" and k8s service object named "xyz")
These are the logs on client startup. Although it recognized the Hazelcast server pod, it gave an AuthenticationException (expanded below). I also want to point out that it did not try to connect to the correct port.
2019-09-18 12:59:36,699 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.client.HazelcastClient (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] A non-empty group password is configured for the Hazelcast client. Starting with Hazelcast version 3.11, clients with the same group name, but with different group passwords (that do not use authentication) will be accepted to a cluster. The group password configuration will be removed completely in a future release.
2019-09-18 12:59:36,709 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.core.LifecycleService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] HazelcastClient 3.11.1 (20181218 - d294f31) is STARTING
2019-09-18 12:59:36,977 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.spi.discovery.integration.DiscoveryService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] Kubernetes Discovery properties: { service-dns: null, service-dns-timeout: 5, service-name: xyz, service-port: 0, service-label: null, service-label-value: true, namespace: default, resolve-not-ready-addresses: false, kubernetes-master: https://kubernetes.default.svc}
2019-09-18 12:59:36,980 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.spi.discovery.integration.DiscoveryService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] Kubernetes Discovery activated resolver: ServiceEndpointResolver
2019-09-18 12:59:36,999 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.client.spi.ClientInvocationService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] Running with 2 response threads
2019-09-18 12:59:37,060 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.core.LifecycleService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] HazelcastClient 3.11.1 (20181218 - d294f31) is STARTED
2019-09-18 12:59:37,390 [instance=local-service_01.devciny-dock] [local-service_01.devciny-dock.cluster-] INFO com.hazelcast.client.connection.ClientConnectionManager (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] Trying to connect to [10.42.1.111]:5701 as owner member
2019-09-18 12:59:37,432 [instance=local-service_01.devciny-dock] [local-service_01.devciny-dock.internal-3] WARN com.hazelcast.client.connection.nio.ClientConnection (Slf4jFactory.java:67) - local-service_01.devciny-dock [instance_identifier] [3.11.1] ClientConnection{alive=false, connectionId=1, channel=NioChannel{/10.42.1.121:39003->/10.42.1.111:5701}, remoteEndpoint=null, lastReadTime=2019-09-18 12:59:37.426, lastWriteTime=2019-09-18 12:59:37.425, closedTime=2019-09-18 12:59:37.431, connected server version=null} closed. Reason: com.hazelcast.client.AuthenticationException[Invalid credentials! Principal: null]
com.hazelcast.client.AuthenticationException: Invalid credentials! Principal: null
at com.hazelcast.client.connection.nio.ClientConnectionManagerImpl$AuthCallback.onResponse(ClientConnectionManagerImpl.java:747)
at com.hazelcast.client.connection.nio.ClientConnectionManagerImpl$AuthCallback.onResponse(ClientConnectionManagerImpl.java:702)
at com.hazelcast.client.spi.impl.ClientInvocationFuture$InternalDelegatingExecutionCallback.onResponse(ClientInvocationFuture.java:130)
at com.hazelcast.client.spi.impl.ClientInvocationFuture$InternalDelegatingExecutionCallback.onResponse(ClientInvocationFuture.java:118)
at com.hazelcast.client.spi.impl.ClientInvocationFuture$InternalDelegatingExecutionCallback.onResponse(ClientInvocationFuture.java:130)
at com.hazelcast.client.spi.impl.ClientInvocationFuture$InternalDelegatingExecutionCallback.onResponse(ClientInvocationFuture.java:118)
at com.hazelcast.spi.impl.AbstractInvocationFuture$1.run(AbstractInvocationFuture.java:255)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64)
at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80)
Your Hazelcast client tries to connect to 10.42.1.111:5701 and correctly finds the Hazelcast server there, so the port looks correct.
What happens next is that it cannot authenticate with the server, which probably means that you didn't specify the cluster password in your Hazelcast configuration. You can read more on how to do it in this StackOverflow question.
You didn't share the most important part of the configuration related to client authentication: the group config of your clusters and the group config of your client. I suspect the problem is rooted there.
The default authentication compares the group name on the member with the username coming from the client; the username is filled with the client's group name (by default).
Check the AuthenticationBaseMessageTask code:
private AuthenticationStatus authenticate(UsernamePasswordCredentials credentials) {
    GroupConfig groupConfig = nodeEngine.getConfig().getGroupConfig();
    String nodeGroupName = groupConfig.getName();
    boolean usernameMatch = nodeGroupName.equals(credentials.getUsername());
    return usernameMatch ? AuthenticationStatus.AUTHENTICATED : AuthenticationStatus.CREDENTIALS_FAILED;
}
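So on the client side the group name has to match whatever the members were started with. A minimal sketch (the group name "dev-cluster" below is just a placeholder for your members' actual group name; the Kubernetes discovery part is taken from the question):

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

public class HazelcastClientAuth {
    public static void main(String[] args) {
        ClientConfig clientConfig = new ClientConfig();

        // Must equal the <group><name> configured on the cluster members;
        // with the default authentication this is the value that gets compared.
        clientConfig.getGroupConfig().setName("dev-cluster");

        // Kubernetes discovery, as in the question.
        clientConfig.getNetworkConfig().getKubernetesConfig()
                .setEnabled(true)
                .setProperty("namespace", "default")
                .setProperty("service-name", "xyz");

        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
    }
}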

IntelliJ DataGrip doesn't open Mongo collection

Since the last update to DataGrip 2019.1.3, DataGrip doesn't open Mongo collections anymore.
Double-clicking any collection loads the collection's overview but not the data view.
The database is successfully connected, the Mongo shell and everything else work, but the data isn't shown.
Here's an excerpt of DataGrip's logs:
2019-06-08 13:11:10,907 [ 680986] INFO - org.mongodb.driver.cluster - Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
2019-06-08 13:11:10,907 [ 680986] INFO - org.mongodb.driver.cluster - Adding discovered server localhost:27017 to client view of cluster
2019-06-08 13:11:10,908 [ 680987] INFO - org.mongodb.driver.cluster - Cluster description not yet available. Waiting for 30000 ms before timing out
2019-06-08 13:11:10,911 [ 680990] INFO - org.mongodb.driver.connection - Opened connection [connectionId{localValue:33, serverValue:143}] to localhost:27017
2019-06-08 13:11:10,911 [ 680990] INFO - org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 0, 6]}, minWireVersion=0, maxWireVersion=7, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=523564}
2019-06-08 13:11:10,912 [ 680991] INFO - org.mongodb.driver.cluster - Discovered cluster type of STANDALONE
2019-06-08 13:11:10,914 [ 680993] INFO - org.mongodb.driver.connection - Opened connection [connectionId{localValue:34, serverValue:144}] to localhost:27017
2019-06-08 13:11:10,917 [ 680996] INFO - org.mongodb.driver.connection - Closed connection [connectionId{localValue:34, serverValue:144}] to localhost:27017 because the pool has been closed.
I encountered the same bug some months ago; at that time simply rolling back the version helped, but now whatever version I roll DataGrip back to, the collection's data isn't shown anymore. I've restarted my Mac, invalidated caches and restarted DataGrip (multiple times). Nothing helped.
How do I make my premium-priced product work like a premium-priced product?
I would try the latest EAP from JetBrains. Since October 4, 2019 it also supports simple MongoDB interactions.
You can find out more on the 2019.3 EAP page.
Another alternative for viewing your MongoDB data is Robo 3T, which is also free in the non-Studio version.
Edit:
JetBrains DataGrip now natively supports MongoDB.

GraphML import into Titan

I'm new to the Titan world. I would like to import data stored in a GraphML file into the database.
1. I downloaded titan-1.0.0-hadoop1.
2. I ran ./titan.sh.
3. I ran ./gremlin.sh.
4. In the Gremlin console I wrote:
   :remote connect tinkerpop.server ../conf/remote.yaml
5. Next, I wrote:
   graph.io(IoCore.graphml()).readGraph("/tmp/file.graphml")
6. I got the message:
   No such property: graph for class: groovysh_evaluate
Could you help me?
IMO the most interesting logs from gremlin-server.log:
84 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Configuring Gremlin Server from conf/gremlin-server/gremlin-server.yaml
158 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics ConsoleReporter configured with report interval=180000ms
160 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics CsvReporter configured with report interval=180000ms to fileName=/tmp/gremlin-server-metrics.csv
196 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics JmxReporter configured with domain= and agentId=
197 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics Slf4jReporter configured with interval=180000ms and loggerName=org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics
1111 [main] WARN org.apache.tinkerpop.gremlin.server.GremlinServer - Graph [graph] configured at [conf/gremlin-server/titan-berkeleyje-server.properties] could not be instantiated and will not be available in Gremlin Server. GraphFactory message: GraphFactory could not instantiate this Graph implementation [class com.thinkaurelius.titan.core.TitanFactory]
java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [class com.thinkaurelius.titan.core.TitanFactory]
...
1113 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
1499 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded nashorn ScriptEngine
2044 [main] INFO org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines - Loaded gremlin-groovy ScriptEngine
2488 [main] WARN org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor - Could not initialize gremlin-groovy ScriptEngine with scripts/empty-sample.groovy as script could not be evaluated - javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: graph for class: Script1
2488 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized GremlinExecutor and configured ScriptEngines.
2581 [main] WARN org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Could not instantiate configured serializer class - org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0 - it will not be available. There is no graph named [graph] configured to be used in the useMapperFromGraph setting
2582 [main] INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Configured application/vnd.gremlin-v1.0+gryo-stringd with org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0
2719 [main] WARN org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Could not instantiate configured serializer class - org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0 - it will not be available. There is no graph named [graph] configured to be used in the useMapperFromGraph setting
2720 [main] WARN org.apache.tinkerpop.gremlin.server.AbstractChannelizer - Could not instantiate configured serializer class - org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0 - it will not be available. There is no graph named [graph] configured to be used in the useMapperFromGraph setting
...
You need to create a graph. The graph variable isn't declared anywhere in your script.
This is briefly covered in the Titan Server documentation, but it is easily overlooked.
In step 5, you need to submit your script command to the remote server. In the Gremlin Console, you do this by starting your command with :submit, or :> for shorthand. The :> is the "submit" command, which sends the Gremlin on that line to the currently active remote.
:> graph.io(IoCore.graphml()).readGraph("/tmp/file.graphml")
If you don't submit the script to the remote server, the Gremlin Console will attempt to process the script within the console's JVM. graph is not defined locally, and that is why you saw the error in step 6.
Update: Based on your gremlin-server.log it looks like the issue is that the user that starts Titan with ./bin/titan.sh start doesn't have the appropriate file permissions to create the directory (db/berkeley) used by the default graph configuration (titan-berkeleyje-server.properties). Try updating the file permissions on the $TITAN_HOME directory.

404 Error while deploying simple web-app in JBoss AS 6 and JBoss AS 7?

I followed this blog for injecting an EJB into the REST layer.
Here is the code that I tried deploying on JBoss AS 6 and 7 using Eclipse:
REST:
package com.example.rest;
import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
@Stateless
@Path("current")
public class ServiceFacade {

    @EJB
    ServiceImpl service;

    @GET
    public String getDate() {
        return service.getCurrentDate().toString();
    }
}
EJB:
import java.util.Date;
import javax.ejb.Stateless;
@Stateless
public class ServiceImpl {

    public Date getCurrentDate() {
        return new Date();
    }
}
JAX-RS application class:
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("rest")
public class RestApplication extends Application {
}
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.sample.rest</groupId>
    <artifactId>restejb</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <description>simplet project to test ejb injection in rest</description>
    <dependencies>
        <dependency>
            <groupId>org.jboss.spec</groupId>
            <artifactId>jboss-javaee-6.0</artifactId>
            <version>1.0.0.Final</version>
            <packaging>war</packaging>
            <type>pom</type>
            <scope>provided</scope>
        </dependency>
    </dependencies>
</project>
When I access http://localhost:8080/restejb/rest/current, I get a 404 Page Not Found error.
Here is the log from the deployment to JBoss AS 6:
11:19:23,701 INFO [AbstractJBossASServerBase] Server Configuration:
JBOSS_HOME URL: file:/Users/sridhar1982AQ/Documents/EE7_servers/jboss-6.1.0.Final/
Bootstrap: $JBOSS_HOME/server/default/conf/bootstrap.xml
Common Base: $JBOSS_HOME/common/
Common Library: $JBOSS_HOME/common/lib/
Server Name: default
Server Base: $JBOSS_HOME/server/
Server Library: $JBOSS_HOME/server/default/lib/
Server Config: $JBOSS_HOME/server/default/conf/
Server Home: $JBOSS_HOME/server/default/
Server Data: $JBOSS_HOME/server/default/data/
Server Log: $JBOSS_HOME/server/default/log/
Server Temp: $JBOSS_HOME/server/default/tmp/
11:19:23,706 INFO [AbstractServer] Starting: JBossAS [6.1.0.Final "Neo"]
11:19:27,412 INFO [ServerInfo] Java version: 1.7.0_71,Oracle Corporation
11:19:27,412 INFO [ServerInfo] Java Runtime: Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
11:19:27,413 INFO [ServerInfo] Java VM: Java HotSpot(TM) 64-Bit Server VM 24.71-b01,Oracle Corporation
11:19:27,413 INFO [ServerInfo] OS-System: Mac OS X 10.9.5,x86_64
11:19:27,414 INFO [ServerInfo] VM arguments: -Dprogram.name=JBossTools: JBoss AS 6.x -Xms256m -Xmx768m -XX:MaxPermSize=256m -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djava.endorsed.dirs=/Users/sridhar1982AQ/Documents/EE7_servers/jboss-6.1.0.Final/lib/endorsed -Djava.library.path=/Users/sridhar1982AQ/Documents/EE7_servers/jboss-6.1.0.Final/bin/native -Dlogging.configuration=file:/Users/sridhar1982AQ/Documents/EE7_servers/jboss-6.1.0.Final/bin/logging.properties -Dfile.encoding=UTF-8
11:19:27,483 INFO [JMXKernel] Legacy JMX core initialized
11:19:35,849 INFO [AbstractServerConfig] JBoss Web Services - Stack CXF Server 3.4.1.GA
11:19:36,679 INFO [JSFImplManagementDeployer] Initialized 3 JSF configurations: [Mojarra-1.2, MyFaces-2.0, Mojarra-2.0]
11:19:47,865 WARNING [FileConfigurationParser] AIO wasn't located on this platform, it will fall back to using pure Java NIO. If your platform is Linux, install LibAIO to enable the AIO journal
11:19:48,389 INFO [JMXConnector] starting JMXConnector on host localhost:1090
11:19:48,609 INFO [MailService] Mail Service bound to java:/Mail
11:19:49,990 INFO [HornetQServerImpl] live server is starting with configuration HornetQ Configuration (clustered=false,backup=false,sharedStore=true,journalDirectory=/Users/sridhar1982AQ/Documents/EE7_servers/jboss-6.1.0.Final/server/default/data/hornetq/journal,bindingsDirectory=/Users/sridhar1982AQ/Documents/EE7_servers/jboss-6.1.0.Final/server/default/data/hornetq/bindings,largeMessagesDirectory=/Users/sridhar1982AQ/Documents/EE7_servers/jboss-6.1.0.Final/server/default/data/hornetq/largemessages,pagingDirectory=/Users/sridhar1982AQ/Documents/EE7_servers/jboss-6.1.0.Final/server/default/data/hornetq/paging)
11:19:49,992 INFO [HornetQServerImpl] Waiting to obtain live lock
11:19:50,105 INFO [JournalStorageManager] Using NIO Journal
11:19:50,140 WARNING [HornetQServerImpl] Security risk! It has been detected that the cluster admin user and password have not been changed from the installation default. Please see the HornetQ user guide, cluster chapter, for instructions on how to do this.
11:19:50,468 INFO [FileLockNodeManager] Waiting to obtain live lock
11:19:50,469 INFO [FileLockNodeManager] Live Server Obtained live lock
11:19:51,283 INFO [NettyAcceptor] Started Netty Acceptor version 3.2.3.Final-r${buildNumber} localhost:5445 for CORE protocol
11:19:51,287 INFO [NettyAcceptor] Started Netty Acceptor version 3.2.3.Final-r${buildNumber} localhost:5455 for CORE protocol
11:19:51,290 INFO [HornetQServerImpl] Server is now live
11:19:51,291 INFO [HornetQServerImpl] HornetQ Server version 2.2.5.Final (HQ_2_2_5_FINAL_AS7, 121) [251821f6-c6bb-11e4-9df3-60334b2115c1] started
11:19:51,386 INFO [WebService] Using RMI server codebase: http://localhost:8083/
11:19:51,699 INFO [jbossatx] ARJUNA-32010 JBossTS Recovery Service (tag: JBOSSTS_4_14_0_Final) - JBoss Inc.
11:19:51,711 INFO [arjuna] ARJUNA-12324 Start RecoveryActivators
11:19:51,745 INFO [arjuna] ARJUNA-12296 ExpiredEntryMonitor running at Tue, 10 Mar 2015 11:19:51
11:19:51,903 INFO [arjuna] ARJUNA-12310 Recovery manager listening on endpoint 127.0.0.1:4712
11:19:51,904 INFO [arjuna] ARJUNA-12344 RecoveryManagerImple is ready on port 4712
11:19:51,905 INFO [jbossatx] ARJUNA-32013 Starting transaction recovery manager
11:19:51,933 INFO [arjuna] ARJUNA-12163 Starting service com.arjuna.ats.arjuna.recovery.ActionStatusService on port 4713
11:19:51,934 INFO [arjuna] ARJUNA-12337 TransactionStatusManagerItem host: 127.0.0.1 port: 4713
11:19:51,937 INFO [arjuna] ARJUNA-12170 TransactionStatusManager started on port 4713 and host 127.0.0.1 with service com.arjuna.ats.arjuna.recovery.ActionStatusService
11:19:52,007 INFO [jbossatx] ARJUNA-32017 JBossTS Transaction Service (JTA version - tag: JBOSSTS_4_14_0_Final) - JBoss Inc.
11:19:52,103 INFO [arjuna] ARJUNA-12202 registering bean jboss.jta:type=ObjectStore.
11:19:52,481 INFO [AprLifecycleListener] The Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /Users/sridhar1982AQ/Documents/EE7_servers/jboss-6.1.0.Final/bin/native
11:19:52,764 INFO [TomcatDeployment] deploy, ctxPath=/invoker
11:19:53,207 INFO [ModClusterService] Initializing mod_cluster 1.1.0.Final
11:19:53,286 INFO [RARDeployment] Required license terms exist, view vfs:/Users/sridhar1982AQ/Documents/EE7_servers/jboss-6.1.0.Final/server/default/deploy/jboss-local-jdbc.rar/META-INF/ra.xml
11:19:53,308 INFO [RARDeployment] Required license terms exist, view vfs:/Users/sridhar1982AQ/Documents/EE7_servers/jboss-6.1.0.Final/server/default/deploy/jboss-xa-jdbc.rar/META-INF/ra.xml
11:19:53,322 INFO [RARDeployment] Required license terms exist, view vfs:/Users/sridhar1982AQ/Documents/EE7_servers/jboss-6.1.0.Final/server/default/deploy/jms-ra.rar/META-INF/ra.xml
11:19:53,347 INFO [HornetQResourceAdapter] HornetQ resource adaptor started
11:19:53,359 INFO [RARDeployment] Required license terms exist, view vfs:/Users/sridhar1982AQ/Documents/EE7_servers/jboss-6.1.0.Final/server/default/deploy/mail-ra.rar/META-INF/ra.xml
11:19:53,382 INFO [RARDeployment] Required license terms exist, view vfs:/Users/sridhar1982AQ/Documents/EE7_servers/jboss-6.1.0.Final/server/default/deploy/quartz-ra.rar/META-INF/ra.xml
11:19:53,522 INFO [SimpleThreadPool] Job execution threads will use class loader of thread: Thread-2
11:19:53,574 INFO [SchedulerSignalerImpl] Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
11:19:53,575 INFO [QuartzScheduler] Quartz Scheduler v.1.8.3 created.
11:19:53,579 INFO [RAMJobStore] RAMJobStore initialized.
11:19:53,583 INFO [QuartzScheduler] Scheduler meta-data: Quartz Scheduler (v1.8.3) 'JBossQuartzScheduler' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
11:19:53,583 INFO [StdSchedulerFactory] Quartz scheduler 'JBossQuartzScheduler' initialized from an externally opened InputStream.
11:19:53,583 INFO [StdSchedulerFactory] Quartz scheduler version: 1.8.3
11:19:53,584 INFO [QuartzScheduler] Scheduler JBossQuartzScheduler_$_NON_CLUSTERED started.
11:19:54,133 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=DefaultDS' to JNDI name 'java:DefaultDS'
11:19:54,533 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=ConnectionFactoryBinding,name=JmsXA' to JNDI name 'java:JmsXA'
11:19:54,715 INFO [xnio] XNIO Version 2.1.0.CR2
11:19:54,733 INFO [nio] XNIO NIO Implementation Version 2.1.0.CR2
11:19:55,100 INFO [remoting] JBoss Remoting version 3.1.0.Beta2
11:19:55,279 INFO [TomcatDeployment] deploy, ctxPath=/
11:19:55,417 INFO [HornetQServerImpl] trying to deploy queue jms.queue.ExpiryQueue
11:19:55,462 INFO [HornetQServerImpl] trying to deploy queue jms.queue.DLQ
11:19:55,508 INFO [service] Removing bootstrap log handlers
11:19:55,616 INFO [org.apache.coyote.http11.Http11Protocol] Starting Coyote HTTP/1.1 on http-localhost%2F127.0.0.1-8080
11:19:55,624 INFO [org.apache.coyote.ajp.AjpProtocol] Starting Coyote AJP/1.3 on ajp-localhost%2F127.0.0.1-8009
11:19:55,625 INFO [org.jboss.bootstrap.impl.base.server.AbstractServer] JBossAS [6.1.0.Final "Neo"] Started in 31s:909ms
11:19:56,100 INFO [org.jboss.web.tomcat.service.deployers.TomcatDeployment] deploy, ctxPath=/restejb
I checked one of my projects using REST, and in the annotations I use a slash (/) before the resource name, both in @ApplicationPath("/rest") and in the REST service @Path("/current"), so your EJB should look something like this:
@Stateless
@Path("/current")
public class ServiceImpl {

    @GET
    public Date getCurrentDate() {
        return new Date();
    }
}
And your activator class should look something like this:
@ApplicationPath("/rest")
public class RestApplication extends Application {
}
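If you want to keep the facade/EJB split from the question, the same leading-slash change applies to the facade as well; a sketch using the class names from the question:

package com.example.rest;

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Stateless
@Path("/current")           // leading slash, as suggested above
public class ServiceFacade {

    @EJB
    ServiceImpl service;    // the plain @Stateless EJB from the question

    @GET
    public String getDate() {
        return service.getCurrentDate().toString();
    }
}

With @ApplicationPath("/rest") this should then be reachable at http://localhost:8080/restejb/rest/current, the URL from the question.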