Trying to override the JUL logger used by Vert.x

I'd like to use Log4j2, so I'm setting system properties in my MainVerticle to select it:
public class MainVerticle extends AbstractVerticle
{
    static
    {
        System.setProperty( "vertx.logger-delegate-factory-class-name",
                "io.vertx.core.logging.Log4j2LogDelegateFactory" );
        System.setProperty( "log4j2.debug", "true" );
    }
    // ...
}
I then deploy my HttpVerticle from this verticle, and in the HTTP verticle I'm trying to use parameterized logging statements, which aren't working. So I added a couple of log statements to show the logger delegate in use, as well as the system property:
public class HttpServerVerticle extends AbstractVerticle
{
    private static final Logger LOGGER = LoggerFactory.getLogger( HttpServerVerticle.class );

    @Override
    public void start() throws Exception
    {
        LOGGER.info( System.getProperty( "vertx.logger-delegate-factory-class-name" ) );
        LOGGER.info( LOGGER.getDelegate().getClass().getName() );
        // ...
And below, in a handler for incoming messages, I'm using this:
LOGGER.info( "Chat message received: {}" + message.body(), message.body() );
Note that I'm adding the message.body() using concatenation to prove that the message is not an empty string.
The output of these log statements is:
[INFO] Sep 24, 2018 2:46:09 AM ca.LinkEdTutoring.chat.http.HttpServerVerticle
[INFO] INFO: io.vertx.core.logging.Log4j2LogDelegateFactory
[INFO] Sep 24, 2018 2:46:09 AM ca.LinkEdTutoring.chat.http.HttpServerVerticle
[INFO] INFO: io.vertx.core.logging.JULLogDelegate
and for an incoming message of the letter "b":
[INFO] INFO: Chat message received: {}b
I've tried setting the system properties in the pom.xml file and on the command line with -D arguments.
This is with Vert.x 3.5.3.
Any thoughts on what I've forgotten to do?
================
EDIT: capturing the salient points from the comment thread.
- You cannot set system properties in a verticle, because the Vert.x JUL logger is initialized before the main verticle runs.
- You cannot set them in the pom.xml when running the code with the vertx Maven plugin; the plugin must get invoked after Vert.x is initialized.
- The only way it seems possible to override the JUL logger is on the command line, using -D arguments.
- Do not forget that the -D arguments go before the -jar switch, i.e., $ java -Dx=y -jar jarname.jar (full example below).
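With the property from this question spelled out, the full invocation looks like:

java -Dvertx.logger-delegate-factory-class-name=io.vertx.core.logging.Log4j2LogDelegateFactory -jar jarname.jar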

If you start the application from the command line, you can configure it with -Dvertx.logger-delegate-factory-class-name=io.vertx.core.logging.Log4j2LogDelegateFactory. This is the easiest option.
You can also set the property directly in code via System.setProperty, which has the same effect as the -D flag, but it must run before the LoggerFactory is initialized. Code in a Verticle subclass only executes after Vert.x has initialized successfully, and by then the LoggerFactory has already been initialized, so setting the property there is too late.
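If you control the entry point, a minimal sketch of a custom launcher that sets the property early enough (MainVerticle is the class from the question; the launcher itself is an assumption about how you start the app):

import io.vertx.core.Vertx;

public class Launcher
{
    public static void main( String[] args )
    {
        // Must run before the first Vert.x class is used, otherwise
        // LoggerFactory has already chosen the JUL delegate.
        System.setProperty( "vertx.logger-delegate-factory-class-name",
                "io.vertx.core.logging.Log4j2LogDelegateFactory" );

        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle( new MainVerticle() );
    }
}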

Related

Error when trying to access Kafka with Quarkus in native mode

I tried some simple sample code to test access to a "kerberized" Kafka from Quarkus 2.2.2 with smallrye-reactive-messaging-kafka:
package org.acme;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.eclipse.microprofile.reactive.messaging.Incoming;

import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class MyTopicConsumer {

    @Incoming("in")
    public void consume(ConsumerRecord<String, String> record) {
        System.out.println("read from Kafka : " + record.value());
    }
}
Kafka is behind Kerberos, so I used an application.properties like this:
quarkus.ssl.native=true
quarkus.native.enable-all-security-services=true
mp.messaging.incoming.in.group.id=my-group
mp.messaging.incoming.in.auto.commit.interval.ms=1000
mp.messaging.incoming.in.security.protocol=SASL_SSL
mp.messaging.incoming.in.sasl.kerberos.service.name=kafka
mp.messaging.incoming.in.sasl.mechanism=GSSAPI
mp.messaging.incoming.in.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule "required" doNotPrompt=true useKeyTab=true storeKey=true serviceName="kafka" keyTab="<keytab>" principal="<principal>" useTicketCache=false;
mp.messaging.incoming.in.ssl.truststore.location=<location>
mp.messaging.incoming.in.ssl.truststore.password=<password>
mp.messaging.incoming.in.connector=smallrye-kafka
mp.messaging.incoming.in.topic=<topic>
mp.messaging.incoming.in.auto.offset.reset=earliest
mp.messaging.incoming.in.enable.auto.commit=false
mp.messaging.incoming.in.bootstrap.servers=<list of servers>
It works nicely in JVM mode, but fails in native mode (graalvm-ce-java11-21.2.0) with this error:
ERROR [io.sma.rea.mes.provider] (main) SRMSG00230: Unable to create the publisher or subscriber during initialization: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:823)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:665)
at io.smallrye.reactive.messaging.kafka.impl.ReactiveKafkaConsumer.<init>(ReactiveKafkaConsumer.java:80)
at io.smallrye.reactive.messaging.kafka.impl.KafkaSource.<init>(KafkaSource.java:85)
at io.smallrye.reactive.messaging.kafka.KafkaConnector.getPublisherBuilder(KafkaConnector.java:182)
at io.smallrye.reactive.messaging.kafka.KafkaConnector_ClientProxy.getPublisherBuilder(KafkaConnector_ClientProxy.zig:159)
at io.smallrye.reactive.messaging.impl.ConfiguredChannelFactory.createPublisherBuilder(ConfiguredChannelFactory.java:190)
at io.smallrye.reactive.messaging.impl.ConfiguredChannelFactory.register(ConfiguredChannelFactory.java:153)
at io.smallrye.reactive.messaging.impl.ConfiguredChannelFactory.initialize(ConfiguredChannelFactory.java:125)
at io.smallrye.reactive.messaging.impl.ConfiguredChannelFactory_ClientProxy.initialize(ConfiguredChannelFactory_ClientProxy.zig:189)
at java.util.Iterator.forEachRemaining(Iterator.java:133)
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
at io.smallrye.reactive.messaging.extension.MediatorManager.start(MediatorManager.java:189)
at io.smallrye.reactive.messaging.extension.MediatorManager_ClientProxy.start(MediatorManager_ClientProxy.zig:220)
at io.quarkus.smallrye.reactivemessaging.runtime.SmallRyeReactiveMessagingLifecycle.onApplicationStart(SmallRyeReactiveMessagingLifecycle.java:41)
at io.quarkus.smallrye.reactivemessaging.runtime.SmallRyeReactiveMessagingLifecycle_Observer_onApplicationStart_4e8937813d9e8faff65c3c07f88fa96615b70e70.notify(SmallRyeReactiveMessagingLifecycle_Observer_onApplicationStart_4e8937813d9e8faff65c3c07f88fa96615b70e70.zig:111)
at io.quarkus.arc.impl.EventImpl$Notifier.notifyObservers(EventImpl.java:300)
at io.quarkus.arc.impl.EventImpl$Notifier.notify(EventImpl.java:282)
at io.quarkus.arc.impl.EventImpl.fire(EventImpl.java:70)
at io.quarkus.arc.runtime.ArcRecorder.fireLifecycleEvent(ArcRecorder.java:128)
at io.quarkus.arc.runtime.ArcRecorder.handleLifecycleEvents(ArcRecorder.java:97)
at io.quarkus.deployment.steps.LifecycleEventsBuildStep$startupEvent1144526294.deploy_0(LifecycleEventsBuildStep$startupEvent1144526294.zig:87)
at io.quarkus.deployment.steps.LifecycleEventsBuildStep$startupEvent1144526294.deploy(LifecycleEventsBuildStep$startupEvent1144526294.zig:40)
at io.quarkus.runner.ApplicationImpl.doStart(ApplicationImpl.zig:623)
at io.quarkus.runtime.Application.start(Application.java:101)
at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:101)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:66)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:42)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:119)
at io.quarkus.runner.GeneratedMain.main(GeneratedMain.zig:29)
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Could not find a public no-argument constructor for org.apache.kafka.common.security.kerberos.KerberosLogin
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:184)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:81)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:105)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:737)
... 30 more
I tried a few changes suggested by other posts, but with no effect.
Can anyone suggest a fix or a workaround?
Thanks
It seems that the native image doesn't contain the constructor of org.apache.kafka.common.security.kerberos.KerberosLogin.
Have you tried registering the class for reflection, as described in https://quarkus.io/guides/writing-native-applications-tips#registering-for-reflection ?
You may need to add this line to your configuration file:
quarkus.native.additional-build-args=-H:ReflectionConfigurationFiles=reflection-config.json
and add the class org.apache.kafka.common.security.kerberos.KerberosLogin to reflection-config.json, as described here: https://quarkus.io/guides/writing-native-applications-tips#using-a-configuration-file
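A minimal sketch of what that reflection-config.json could contain, following the GraalVM reflection-configuration format (the missing constructor is the certain part; whether you also need the method flag is an assumption):

[
  {
    "name" : "org.apache.kafka.common.security.kerberos.KerberosLogin",
    "allDeclaredConstructors" : true,
    "allDeclaredMethods" : true
  }
]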

Kafka testcontainer not running

I am trying to set up an integration test environment for Debezium (following the instructions in this example), but the test container (default image: confluentinc/cp-kafka:5.2.1) doesn't start and instead throws an exception.
I am using the code below to create a KafkaContainer bean:
@Bean
public KafkaContainer kafkaContainer() {
    if (kafkaContainer == null) {
        kafkaContainer = new KafkaContainer()
                .withNetwork(network())
                .withExternalZookeeper("172.17.0.2:2181");
        kafkaContainer.start();
    }
    return kafkaContainer;
}
It throws the following exception:
***************************
APPLICATION FAILED TO START
***************************
Description:
An attempt was made to call a method that does not exist. The attempt was made from the following location:
org.testcontainers.containers.KafkaContainer.getBootstrapServers(KafkaContainer.java:91)
The following method did not exist:
org/testcontainers/containers/KafkaContainer.getHost()Ljava/lang/String;
The method's class, org.testcontainers.containers.KafkaContainer, is available from the following locations:
jar:file:/home/shubham/.m2/repository/org/testcontainers/kafka/1.14.3/kafka-1.14.3.jar!/org/testcontainers/containers/KafkaContainer.class
The class hierarchy was loaded from the following locations:
org.testcontainers.containers.KafkaContainer: file:/home/shubham/.m2/repository/org/testcontainers/kafka/1.14.3/kafka-1.14.3.jar
org.testcontainers.containers.GenericContainer: file:/home/shubham/.m2/repository/org/testcontainers/testcontainers/1.12.5/testcontainers-1.12.5.jar
org.testcontainers.containers.FailureDetectingExternalResource: file:/home/shubham/.m2/repository/org/testcontainers/testcontainers/1.12.5/testcontainers-1.12.5.jar
Action:
Correct the classpath of your application so that it contains a single, compatible version of org.testcontainers.containers.KafkaContainer
2020-09-10 01:09:49.937 ERROR 72507 --- [ main] o.s.test.context.TestContextManager : Caught exception while allowing TestExecutionListener
I used an older, consistent version of org.testcontainers across the Maven dependencies and it worked. Thanks!
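The "method did not exist" message above comes from mixing Testcontainers versions (the kafka module at 1.14.3 next to core at 1.12.5). A sketch of aligned Maven coordinates, assuming you settle on one version such as 1.14.3:

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>testcontainers</artifactId>
    <version>1.14.3</version>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>kafka</artifactId>
    <version>1.14.3</version>
</dependency>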

How do I run a Beam class in Dataflow which accesses a Google Cloud SQL instance?

When I run my pipeline from my local machine, I can update the table that resides in the Cloud SQL instance. But when I moved this to run using the DataflowRunner, it fails with the exception below.
To connect from my Eclipse, I created the data source config as
.create("com.mysql.jdbc.Driver", "jdbc:mysql://<ip of sql instance>:3306/mydb")
which I changed to
.create("com.mysql.jdbc.GoogleDriver", "jdbc:google:mysql://<project-id>:<instance-name>/my-db") while running through the Dataflow runner.
Should I prefix the zone information of the instance to ?
The exception I get when I run this is given below:
Jun 22, 2017 6:53:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2017-06-22T13:23:51.583Z: (840be37ab35d3d0d): Starting 2 workers in us-central1-f...
Jun 22, 2017 6:53:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2017-06-22T13:23:51.634Z: (dabfae1dc9365d10): Executing operation JdbcIO.Read/Create.Values/Read(CreateSource)+JdbcIO.Read/ParDo(Read)+JdbcIO.Read/ParDo(Anonymous)+JdbcIO.Read/GroupByKey/Reify+JdbcIO.Read/GroupByKey/Write
Jun 22, 2017 6:54:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2017-06-22T13:24:44.762Z: (21395b94f8bf7f61): Workers have started successfully.
SEVERE: 2017-06-22T13:25:30.214Z: (3b988386f963503e): java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.sql.SQLException: Cannot load JDBC driver class 'com.mysql.jdbc.GoogleDriver'
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:289)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:261)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:55)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:43)
at com.google.cloud.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:78)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.create(MapTaskExecutorFactory.java:152)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:272)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:125)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:105)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:92)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.util.UserCodeException: java.sql.SQLException: Cannot load JDBC driver class 'com.mysql.jdbc.GoogleDriver'
at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:36)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Read$ReadFn$auxiliary$M7MKjX9p.invokeSetup(Unknown Source)
at com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.deserializeCopy(DoFnInstanceManagers.java:65)
at com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.peek(DoFnInstanceManagers.java:47)
at com.google.cloud.dataflow.worker.runners.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:100)
at com.google.cloud.dataflow.worker.runners.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:70)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.createParDoOperation(MapTaskExecutorFactory.java:365)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:278)
... 14 more
Any help to fix this is really appreciated. This is my first attempt to run a Beam pipeline as a Dataflow job.
PipelineOptions options = PipelineOptionsFactory.as(DataflowPipelineOptions.class);
((DataflowPipelineOptions) options).setNumWorkers(2);
((DataflowPipelineOptions) options).setProject("xxxxx");
((DataflowPipelineOptions) options).setStagingLocation("gs://xxxx/staging");
((DataflowPipelineOptions) options).setRunner(DataflowRunner.class);
((DataflowPipelineOptions) options).setStreaming(false);
options.setTempLocation("gs://xxxx/tempbucket");
options.setJobName("sqlpipeline");

PCollection<Account> collection = dataflowPipeline.apply(JdbcIO.<Account>read()
        .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration
                .create("com.mysql.jdbc.GoogleDriver", "jdbc:google:mysql://project-id:testdb/db")
                .withUsername("root").withPassword("root"))
        .withQuery(
                "select account_id,account_parent,account_description,account_type,account_rollup,Custom_Members from account")
        .withCoder(AvroCoder.of(Account.class))
        .withStatementPreparator(new JdbcIO.StatementPreparator() {
            public void setParameters(PreparedStatement preparedStatement) throws Exception {
                preparedStatement.setFetchSize(1);
                preparedStatement.setFetchDirection(ResultSet.FETCH_FORWARD);
            }
        })
        .withRowMapper(new JdbcIO.RowMapper<Account>() {
            public Account mapRow(ResultSet resultSet) throws Exception {
                // Copy every column out of the result set.
                Account account = new Account();
                account.setAccount_id(resultSet.getInt("account_id"));
                account.setAccount_parent(resultSet.getInt("account_parent"));
                account.setAccount_description(resultSet.getString("account_description"));
                account.setAccount_type(resultSet.getString("account_type"));
                account.setAccount_rollup(resultSet.getString("account_rollup"));
                account.setCustom_Members(resultSet.getString("Custom_Members"));
                return account;
            }
        }));
Have you properly pulled in the com.google.cloud.sql/mysql-socket-factory Maven dependency? It looks like you are failing to load the class.
https://cloud.google.com/appengine/docs/standard/java/cloud-sql/#Java_Connect_to_your_database
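For reference, a sketch of the Maven coordinates for that dependency (the version here is an assumption; check for the current release):

<dependency>
    <groupId>com.google.cloud.sql</groupId>
    <artifactId>mysql-socket-factory</artifactId>
    <version>1.0.0</version>
</dependency>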
Hi, I think it's better to stick with "com.mysql.jdbc.Driver", because the Google driver is intended for App Engine deployments.
This is what my pipeline configuration looks like, and it works perfectly fine for me:
PCollection<KV<Double, Double>> exchangeRates = p.apply(JdbcIO.<KV<Double, Double>>read()
        .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
                "com.mysql.jdbc.Driver",
                "jdbc:mysql://ip:3306/dbname?user=root&password=root&useUnicode=true&characterEncoding=UTF-8"))
        .withQuery("SELECT PERIOD_YEAR, PERIOD_YEAR FROM SALE")
        .withCoder(KvCoder.of(DoubleCoder.of(), DoubleCoder.of()))
        .withRowMapper(new JdbcIO.RowMapper<KV<Double, Double>>() {
            @Override
            public KV<Double, Double> mapRow(java.sql.ResultSet resultSet) throws Exception {
                LOG.info(resultSet.getDouble(1) + " Came");
                return KV.of(resultSet.getDouble(1), resultSet.getDouble(2));
            }
        }));
Hope it helps.

Custom MessageBodyReader not found in JerseyTest

I am having a bizarre problem with my JerseyTest class.
When executing my test code and putting a break point on line 203 of org.glassfish.jersey.message.internal.ReaderInterceptorExecutor, I see that my reader is not in reader.workers. However, as you can see below, I register this MessageBodyReader in the ResourceConfig.
All relevant code is provided below.
My custom MessageBodyReader/Writer
@Provider
@Produces({V1_JSON})
@Consumes({V1_JSON})
public class JsonMessageBodyHandlerV1
        implements MessageBodyWriter<Object>, MessageBodyReader<Object> {
    ...
}
And yes, isReadable returns true.
When debugging, I see that the code hits writeTo but not readFrom.
Test code that fails
public class TestLocationResource extends JerseyTest {

    public static class LocationResourceHK2Binder extends AbstractBinder {
        @Override
        protected void configure() {
            // Singleton bindings.
            bindAsContract(LocationDao.class).in(Singleton.class);
            // Singleton instance bindings.
            bind(new FakeLocationDao()).to(LocationDao.class);
        }
    }

    @Test
    public void basicTest() {
        LocationListV1 actualResponse = /**/
        /**/target(LocationResourceV1.PathFields.PATH_ROOT)
        /* */.path(LocationResourceV1.PathFields.SUBPATH_LIST)
        /* */.request(V1_JSON)
        /* */.header(HEADER_API_KEY, "abcdefg")
        /* */.get(LocationListV1.class);
        assertEquals(10, actualResponse.getLocations().size());
    }

    @Override
    protected Application configure() {
        enable(TestProperties.LOG_TRAFFIC);
        enable(TestProperties.DUMP_ENTITY);
        ResourceConfig rc = new ResourceConfig();
        rc.registerClasses(LocationResourceV1.class, JsonMessageBodyHandlerV1.class);
        rc.register(new LocationResourceHK2Binder());
        return rc;
    }
}
(Pulling from this example.)
The resource it's testing...
public class LocationResourceV1 implements ILocationResourceV1 {
    ...
    @Inject
    private LocationDao daoLoc;

    private final LocationTranslator translator = new LocationTranslator();

    @Override
    public LocationListV1 listV1(String apiKey) {
        return translator.translate(daoLoc.query(LocationFilters.SELECT_ALL));
    }
    ...
    @VisibleForTesting
    public void setLocationDao(LocationDao dao) {
        this.daoLoc = dao;
    }
}
(Note that the web service annotations such as @GET are in the interface.)
Generates this fail trace
org.glassfish.jersey.message.internal.MessageBodyProviderNotFoundException: MessageBodyReader not found for media type=application/vnd.com.company-v1+json, type=class com.company.rest.v1.resources.location.json.LocationListV1, genericType=class com.company.rest.v1.resources.location.json.LocationListV1.
at org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$TerminalReaderInterceptor.aroundReadFrom(ReaderInterceptorExecutor.java:207)
at org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:139)
at org.glassfish.jersey.message.internal.MessageBodyFactory.readFrom(MessageBodyFactory.java:1109)
at org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:851)
at org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:785)
at org.glassfish.jersey.client.InboundJaxrsResponse.readEntity(InboundJaxrsResponse.java:96)
at org.glassfish.jersey.client.JerseyInvocation.translate(JerseyInvocation.java:761)
at org.glassfish.jersey.client.JerseyInvocation.access$500(JerseyInvocation.java:90)
at org.glassfish.jersey.client.JerseyInvocation$2.call(JerseyInvocation.java:671)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:422)
at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:667)
at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:396)
at org.glassfish.jersey.client.JerseyInvocation$Builder.get(JerseyInvocation.java:296)
at com.company.rest.resources.location.TestLocationResource.basicTest(TestLocationResource.java:47)
[...]
... with this console output
[...]
INFO: [HttpServer] Started.
Oct 29, 2013 4:26:16 PM org.glassfish.jersey.filter.LoggingFilter log
INFO: 1 * LoggingFilter - Request received on thread main
1 > GET http://localhost:9998/location/list
1 > Accept: application/vnd.com.company-v1+json
1 > X-ApiKey: abcdefg
Oct 29, 2013 3:30:21 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: HK2 service reification failed for [com.company.persistence.dao.intf.LocationDao] with an exception:
MultiException stack 1 of 2
java.lang.NoSuchMethodException: Could not find a suitable constructor in com.company.persistence.dao.intf.LocationDao class.
at org.glassfish.jersey.internal.inject.JerseyClassAnalyzer.getConstructor(JerseyClassAnalyzer.java:189)
at org.jvnet.hk2.internal.Utilities.getConstructor(Utilities.java:159)
at org.jvnet.hk2.internal.ClazzCreator.initialize(ClazzCreator.java:125)
at org.jvnet.hk2.internal.ClazzCreator.initialize(ClazzCreator.java:176)
at org.jvnet.hk2.internal.SystemDescriptor.internalReify(SystemDescriptor.java:649)
at org.jvnet.hk2.internal.SystemDescriptor.reify(SystemDescriptor.java:604)
at org.jvnet.hk2.internal.ServiceLocatorImpl.reifyDescriptor(ServiceLocatorImpl.java:396)
[...]
MultiException stack 2 of 2
java.lang.IllegalArgumentException: Errors were discovered while reifying SystemDescriptor(
implementation=com.company.persistence.dao.intf.LocationDao
contracts={com.company.persistence.dao.intf.LocationDao}
scope=org.glassfish.jersey.process.internal.RequestScoped
qualifiers={}
descriptorType=CLASS
descriptorVisibility=NORMAL
metadata=
rank=0
loader=org.glassfish.hk2.utilities.binding.AbstractBinder$2#568bf3ec
proxiable=null
proxyForSameScope=null
analysisName=null
id=143
locatorId=0
identityHashCode=2117810007
reified=false)
at org.jvnet.hk2.internal.SystemDescriptor.reify(SystemDescriptor.java:615)
at org.jvnet.hk2.internal.ServiceLocatorImpl.reifyDescriptor(ServiceLocatorImpl.java:396)
at org.jvnet.hk2.internal.ServiceLocatorImpl.narrow(ServiceLocatorImpl.java:1916)
at org.jvnet.hk2.internal.ServiceLocatorImpl.access$700(ServiceLocatorImpl.java:113)
at org.jvnet.hk2.internal.ServiceLocatorImpl$6.compute(ServiceLocatorImpl.java:993)
at org.jvnet.hk2.internal.ServiceLocatorImpl$6.compute(ServiceLocatorImpl.java:988)
[...]
[...]
(Above is repeated 4 times)
[...]
[...]
Followed by this, implying that there was a successful response
Oct 29, 2013 3:30:22 PM org.glassfish.jersey.filter.LoggingFilter log
INFO: 2 * LoggingFilter - Response received on thread main
2 < 200
2 < Date: Tue, 29 Oct 2013 22:30:21 GMT
2 < Content-Length: 16
2 < Content-Type: application/vnd.com.company-v1+json
{"locations":[]}
Oct 29, 2013 3:30:22 PM org.glassfish.jersey.test.grizzly.GrizzlyTestContainerFactory$GrizzlyTestContainer stop
INFO: Stopping GrizzlyTestContainer...
Oct 29, 2013 3:30:22 PM org.glassfish.grizzly.http.server.NetworkListener stop
INFO: Stopped listener bound to [localhost:9998]
Anybody have any idea what I am doing wrong?
The first stack trace comes from your client, because you didn't register your message-body provider there (and hence it cannot be found). The JerseyTest#configure method is supposed to configure only the server side. There is another method, JerseyTest#configureClient, intended for the client side. You need to override both methods if you want to use a custom provider on both sides.
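A minimal sketch of that client-side override, registering the provider class from the question (ClientConfig is org.glassfish.jersey.client.ClientConfig):

@Override
protected void configureClient(ClientConfig config) {
    // Register the same reader/writer the server side uses.
    config.register(JsonMessageBodyHandlerV1.class);
}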
The second stack trace comes from your LocationResourceHK2Binder. By
bindAsContract(LocationDao.class).in(Singleton.class);
you're telling HK2 that the LocationDao class itself should be instantiated as a singleton and injected into LocationDao injection points; since HK2 cannot find a suitable constructor for it (the NoSuchMethodException above), reification fails. You may want to change your binder to use only something like:
bind(new FakeLocationDao()).to(LocationDao.class);
For more information on this topic, refer to Custom Injection and Lifecycle Management.
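Put together, a sketch of the corrected binder (names taken from the question):

public static class LocationResourceHK2Binder extends AbstractBinder {
    @Override
    protected void configure() {
        // Bind the fake instance to the contract type; no bindAsContract needed.
        bind(new FakeLocationDao()).to(LocationDao.class);
    }
}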

rmi java.lang.ClassNotFoundException: RMIServerImpl_Stub

When I start the RMI server implementation class, it displays this error message:
Remote exception: java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is:
java.lang.ClassNotFoundException: RMIServerImpl_Stub
Commands run:
start rmiregistry
start java -Djava.security.policy=policyfile RMIServerImpl
What can I do to resolve this? Please help.
This is my RMI server code:
import java.rmi.*;
import java.rmi.server.*;
import java.rmi.registry.*;

public class RMIServerImpl extends UnicastRemoteObject implements RMIServer {

    RMIServerImpl() throws RemoteException {
        super();
    }

    public static void main(String args[]) {
        try {
            System.setSecurityManager(new RMISecurityManager());
            RMIServerImpl Server = new RMIServerImpl();
            Naming.rebind("SAMPLE-SERVER", Server);
            System.out.println("Server waiting.....");
        } catch (java.net.MalformedURLException mue) {
            System.out.println("Malformed URL: " + mue.toString());
        } catch (RemoteException re) {
            System.out.println("Remote exception: " + re.toString());
        }
    }
}
Sounds like you didn't run the rmic compiler to generate stubs and skeletons.
It's been so long since I've done raw RMI by hand that I don't know if that step is still required, but it was the last time I did RMI.
If you did run rmic, then I'd guess that you didn't package the stub and skeleton properly with the server and client sides. If you can find those .class files, check your packaging and deployment.
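For reference, the classic build-and-run sequence looked roughly like this (whether your JDK still ships rmic depends on its version; the policy file name is taken from the question):

javac RMIServerImpl.java
rmic RMIServerImpl
start rmiregistry
start java -Djava.security.policy=policyfile RMIServerImpl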