Kafka testcontainer not running

I am trying to set up an integration test environment for a Debezium integration (following the instructions in this example), but the test container (default image: confluentinc/cp-kafka:5.2.1) doesn't start and instead throws an exception.
I am using the code below to create a KafkaContainer bean:
@Bean
public KafkaContainer kafkaContainer() {
    if (kafkaContainer == null) {
        kafkaContainer = new KafkaContainer()
                .withNetwork(network())
                .withExternalZookeeper("172.17.0.2:2181");
        kafkaContainer.start();
    }
    return kafkaContainer;
}
It throws the following exception:
***************************
APPLICATION FAILED TO START
***************************
Description:
An attempt was made to call a method that does not exist. The attempt was made from the following location:
org.testcontainers.containers.KafkaContainer.getBootstrapServers(KafkaContainer.java:91)
The following method did not exist:
org/testcontainers/containers/KafkaContainer.getHost()Ljava/lang/String;
The method's class, org.testcontainers.containers.KafkaContainer, is available from the following locations:
jar:file:/home/shubham/.m2/repository/org/testcontainers/kafka/1.14.3/kafka-1.14.3.jar!/org/testcontainers/containers/KafkaContainer.class
The class hierarchy was loaded from the following locations:
org.testcontainers.containers.KafkaContainer: file:/home/shubham/.m2/repository/org/testcontainers/kafka/1.14.3/kafka-1.14.3.jar
org.testcontainers.containers.GenericContainer: file:/home/shubham/.m2/repository/org/testcontainers/testcontainers/1.12.5/testcontainers-1.12.5.jar
org.testcontainers.containers.FailureDetectingExternalResource: file:/home/shubham/.m2/repository/org/testcontainers/testcontainers/1.12.5/testcontainers-1.12.5.jar
Action:
Correct the classpath of your application so that it contains a single, compatible version of org.testcontainers.containers.KafkaContainer
2020-09-10 01:09:49.937 ERROR 72507 --- [ main] o.s.test.context.TestContextManager : Caught exception while allowing TestExecutionListener

Using matching versions of the org.testcontainers modules in the Maven dependencies fixed it: the stack trace shows the kafka module at 1.14.3 on top of testcontainers core 1.12.5, and getHost() doesn't exist in that older core. Thanks!
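One hedged way to keep the Testcontainers modules in lockstep (a sketch assuming a Maven build; the version shown is illustrative) is to import the Testcontainers BOM and drop the per-module versions:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>testcontainers-bom</artifactId>
            <version>1.14.3</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
With the BOM imported, org.testcontainers:kafka and org.testcontainers:testcontainers resolve to the same release, which avoids this kind of missing-method mismatch.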

Related

How do I disable Elasticsearch and fall back to local filesystem storage in Hibernate Search 6

I am using Hibernate Search 6 with Amazon's OpenSearch server in production. When I'm testing locally I don't want to use the OpenSearch server; instead I want to use the local filesystem to store the index files.
However, I can't get Hibernate Search to use the local filesystem even when I explicitly set it with jpaProperties.put("hibernate.search.backend.directory.type", "local-filesystem"); while at the same time not setting the property hibernate.search.backend.uris. Before all the Hibernate Search properties can be set programmatically, I get the following error on startup:
default backend:
failures:
- HSEARCH400080: Unable to detect the Elasticsearch version running on the cluster: HSEARCH400007: Elasticsearch request failed: Connection refused: no further information
Request: GET with parameters {}
I have the following Maven dependencies:
<dependency>
    <groupId>org.hibernate.search</groupId>
    <artifactId>hibernate-search-mapper-orm</artifactId>
    <version>6.1.5.Final</version>
</dependency>
<dependency>
    <groupId>org.hibernate.search</groupId>
    <artifactId>hibernate-search-backend-elasticsearch-aws</artifactId>
    <version>6.1.5.Final</version>
</dependency>
The following sets the Hibernate Search properties, including the Lucene file destination path when using Lucene only:
private Properties initializeJpaProperties() {
    String luceneAbsoluteFilePath = useExternalElasticSearchServer ? null : setDefaultLuceneIndexBaseFilePath(); // Only relevant when not using Elasticsearch.
    Properties jpaProperties = new Properties();

    //---------------------------OpenSearch AWS related-----------------------------------
    if (useExternalElasticSearchServer) {
        jpaProperties.put("hibernate.search.backend.aws.credentials.type", "static");
        jpaProperties.put("hibernate.search.backend.aws.credentials.access_key_id", awsId);
        jpaProperties.put("hibernate.search.backend.aws.credentials.secret_access_key", awsKey);
        jpaProperties.put("hibernate.search.backend.aws.region", openSearchAwsInstanceRegion);
        jpaProperties.put("hibernate.search.backend.aws.signing.enabled", true);
        jpaProperties.put("hibernate.search.backend.uris", elasticSearchHostAddress);
    }
    //-------------------------------------------------------------------------------------

    jpaProperties.put("hibernate.search.automatic_indexing.synchronization.strategy", indexSynchronizationStrategy);
    jpaProperties.put("hibernate.search.backend.request_timeout", requestTimeout);
    jpaProperties.put("hibernate.search.backend.connection_timeout", elasticSearchConnectionTimeout);
    jpaProperties.put("hibernate.search.backend.read_timeout", readTimeout);
    jpaProperties.put("hibernate.search.backend.max_connections", maximumElasticSearchConnections);
    jpaProperties.put("hibernate.search.backend.max_connections_per_route", maximumElasticSearchConnectionsPerRout);
    // jpaProperties.put("hibernate.search.schema_management.strategy", schemaManagementStrategy);
    jpaProperties.put("hibernate.search.backend.thread_pool.size", maxPoolSize);
    jpaProperties.put("hibernate.search.backend.analysis.configurer", "class:config.EnhancedLuceneAnalysisConfig");
    // jpaProperties.put("hibernate.search.backend.username", hibernateSearchUsername); // Only when running Elasticsearch locally.
    // jpaProperties.put("hibernate.search.backend.password", hibernateSearchPassword);

    if (!useExternalElasticSearchServer) {
        jpaProperties.put("hibernate.search.backend.directory.type", "local-filesystem");
        jpaProperties.put("hibernate.search.backend.directory.root", luceneAbsoluteFilePath);
        jpaProperties.put("hibernate.search.backend.lucene_version", "LUCENE_CURRENT");
        jpaProperties.put("hibernate.search.backend.io.writer.infostream", true);
    }

    jpaProperties.put("hibernate.jdbc.batch_size", defaultBatchSize);
    jpaProperties.put("spring.jpa.properties.hibernate.jdbc.batch_size", defaultBatchSize);
    jpaProperties.put("hibernate.order_inserts", "true");
    jpaProperties.put("hibernate.order_updates", "true");
    jpaProperties.put("hibernate.batch_versioned_data", "true");
    // log.info("The directory of the lucene index files is set to {}", luceneAbsoluteFilePath);
    return jpaProperties;
}
private String setDefaultLuceneIndexBaseFilePath() {
    String luceneRelativeFilePath = ServiceUtil.getOperatingSystemCompatiblePath("/data/lucene/indexes/default");
    StringBuilder luceneAbsoluteFilePath = new StringBuilder(System.getProperty("user.dir"));
    if (!StringUtils.isEmpty(luceneIndexBase)) {
        luceneRelativeFilePath = ServiceUtil.getOperatingSystemCompatiblePath(luceneIndexBase);
        String OSPathSeparator = ServiceUtil.getOperatingSystemFileSeparator();
        if (luceneRelativeFilePath.toCharArray()[0] != OSPathSeparator.charAt(0))
            luceneAbsoluteFilePath.append(OSPathSeparator);
        luceneAbsoluteFilePath.append(luceneRelativeFilePath);
        validateUserDefinedAbsolutePath(luceneRelativeFilePath);
    } else {
        log.warn("No relative path value for property 'lucene-index-base' was found in application.properties, will use the default path '{}' instead.", luceneRelativeFilePath);
        luceneAbsoluteFilePath.append(luceneRelativeFilePath);
    }
    return luceneAbsoluteFilePath.toString();
}
I know that I can disable Hibernate Search completely by setting hibernate.search.enabled to false, but I don't want that. I want to be able to switch to Lucene only, without having to remove all the Elasticsearch/OpenSearch dependencies from my pom.xml beforehand. How do I do this?
EDIT: I just found out that you can set the backend type with hibernate.search.backend.type. This setting defaults to elasticsearch. I should also be able to set it to lucene, but when I do that I get the following error:
default backend:
failures:
- HSEARCH000501: Invalid value for configuration property 'hibernate.search.backend.type': 'lucene'. HSEARCH000579: Unable to resolve bean reference to type 'org.hibernate.search.engine.backend.spi.BackendFactory' and name 'lucene'. Failed to resolve bean from Hibernate Search's internal registry with exception: HSEARCH000578: No beans defined for type 'org.hibernate.search.engine.backend.spi.BackendFactory' and name 'lucene' in Hibernate Search's internal registry. Failed to resolve bean from bean manager with exception: HSEARCH000590: No configured bean manager. Failed to resolve bean from bean manager with exception: HSEARCH000591: Unable to resolve 'lucene' to a class extending 'org.hibernate.search.engine.backend.spi.BackendFactory': HSEARCH000530: Unable to load class 'lucene': Could not load requested class : lucene Failed to resolve bean using reflection with exception: HSEARCH000591: Unable to resolve 'lucene' to a class extending 'org.hibernate.search.engine.backend.spi.BackendFactory': HSEARCH000530: Unable to load class 'lucene': Could not load requested class : lucene
EDIT 2:
I also tried the following settings, with no success:
jpaProperties.put("hibernate.search.default_backend", "lucene");
jpaProperties.put("hibernate.search.backends.lucene.type", "lucene");
jpaProperties.put("hibernate.search.backend.type", "lucene");
You need to set the backend type explicitly according to the environment:
jpaProperties.put("hibernate.search.backend.type", isProductionEnvironment() ? "elasticsearch" : "lucene");
And you also need the Lucene backend on your classpath:
<dependency>
    <groupId>org.hibernate.search</groupId>
    <artifactId>hibernate-search-backend-lucene</artifactId>
    <version>${my-hibernate-search-version}</version>
</dependency>

Spring Cloud Gateway 500 when an instance is down

I have a Spring Cloud Gateway (Eureka client) app that uses Spring Cloud LoadBalancer (Spring Cloud version: Hoxton.SR6), and an instance of a Spring Boot 2.3 app with graceful shutdown enabled (also a Eureka client).
When I shut down the Spring Boot service and send a request through the gateway, the gateway throws a 500 error (connection refused) instead of 503. The 503 appears only after 1-2 minutes.
Can anyone clarify whether this is expected behavior?
The problem seems to come from the eureka-client (version 1.9.21 in my case): the AtomicReference<Applications> localRegionApps isn't updated frequently enough.
Thanks!
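The refresh cadence of that local registry is configurable on the Eureka client. A hedged sketch for application.yml (the value is illustrative, and shortening the interval only narrows the stale window rather than eliminating it):
eureka:
  client:
    registry-fetch-interval-seconds: 5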
UPDATE:
I decided to dig deeper into this 500 error. It turns out my system (Ubuntu) gives this error when nothing is listening on the port:
curl -v localhost:9722
* Rebuilt URL to: localhost:9722/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 9722 failed: Connection refused
* Failed to connect to localhost port 9722: Connection refused
* Closing connection 0
So I put in my application.yml:
spring:
  cloud:
    gateway:
      routes:
        - id: my_route
          uri: http://localhost:9722/
Then, when my request is routed to my_route and no app is listening on port 9722, I get this error:
io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: localhost/127.0.0.1:9722
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
|_ checkpoint ⇢ org.springframework.cloud.gateway.filter.WeightCalculatorWebFilter [DefaultWebFilterChain]
|_ checkpoint ⇢ org.springframework.boot.actuate.metrics.web.reactive.server.MetricsWebFilter [DefaultWebFilterChain]
|_ checkpoint ⇢ HTTP GET "/internal/mail/internal/health-check" [ExceptionHandlingWebHandler]
Stack trace:
Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused
at io.netty.channel.unix.Errors.throwConnectException(Errors.java:124)
at io.netty.channel.unix.Socket.finishConnect(Socket.java:251)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:672)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:649)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:529)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:465)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
It seems to be treated as an unexpected exception: it can't be handled using a circuit breaker or any gateway filter.
Is it possible to handle this error correctly? I would like to return 503 in this case.
One of the easiest ways to map a particular exception to a particular HTTP status code is to provide a custom bean of type org.springframework.boot.web.reactive.error.ErrorAttributes. Here is an example:
@Bean
public ErrorAttributes errorAttributes() {
    return new CustomErrorAttributes();
}

public class CustomErrorAttributes extends DefaultErrorAttributes {

    @Override
    public Map<String, Object> getErrorAttributes(ServerRequest request, ErrorAttributeOptions options) {
        Map<String, Object> attributes = super.getErrorAttributes(request, options);
        Throwable error = getError(request);
        MergedAnnotation<ResponseStatus> responseStatusAnnotation = MergedAnnotations
                .from(error.getClass(), MergedAnnotations.SearchStrategy.TYPE_HIERARCHY).get(ResponseStatus.class);
        HttpStatus errorStatus = determineHttpStatus(error, responseStatusAnnotation);
        attributes.put("status", errorStatus.value());
        return attributes;
    }

    private HttpStatus determineHttpStatus(Throwable error, MergedAnnotation<ResponseStatus> responseStatusAnnotation) {
        if (error instanceof ResponseStatusException) {
            return ((ResponseStatusException) error).getStatus();
        }
        return responseStatusAnnotation.getValue("code", HttpStatus.class).orElseGet(() -> {
            // Map refused connections (downstream instance is down) to 503
            if (error instanceof java.net.ConnectException) {
                return HttpStatus.SERVICE_UNAVAILABLE;
            }
            return HttpStatus.INTERNAL_SERVER_ERROR;
        });
    }
}
Have a try at defining your own custom ErrorWebExceptionHandler.
See:
org.springframework.boot.web.reactive.error.ErrorWebExceptionHandler
org.springframework.boot.autoconfigure.web.reactive.error.DefaultErrorWebExceptionHandler
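A minimal sketch of that approach (the class name, order value, and the ConnectException mapping are illustrative assumptions, not a drop-in implementation; imports omitted like the other snippets here):
@Component
@Order(-2) // run before Spring Boot's default error handler
public class ConnectExceptionHandler implements ErrorWebExceptionHandler {

    @Override
    public Mono<Void> handle(ServerWebExchange exchange, Throwable ex) {
        if (ex instanceof java.net.ConnectException) {
            // Downstream instance is gone: answer 503 instead of the generic 500
            exchange.getResponse().setStatusCode(HttpStatus.SERVICE_UNAVAILABLE);
            return exchange.getResponse().setComplete();
        }
        return Mono.error(ex); // let the default handlers deal with everything else
    }
}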
You should use Spring Cloud Circuit Breaker. For that:
Declare the corresponding starter in your pom:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-circuitbreaker-reactor-resilience4j</artifactId>
</dependency>
Declare the circuit breaker in application.yaml:
spring:
  cloud:
    gateway:
      routes:
        - id: my_route
          uri: http://localhost:9722/
          filters:
            - name: CircuitBreaker
              args:
                name: myCircuitBreaker
                fallbackUri: forward:/inCaseOfFailureUseThis
Declare the endpoint which will be called in the case of failure (a connection error, for example):
@RequestMapping("/inCaseOfFailureUseThis")
public Mono<ResponseEntity<String>> inCaseOfFailureUseThis() {
    return Mono.just(ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE).body("body for service failure case"));
}

Spring Cloud Config Zookeeper Exception

I am trying to use Spring Cloud Config with Zookeeper for configuration management.
My build.gradle looks like:
dependencies {
    compile('org.springframework.boot:spring-boot-starter-actuator')
    compile('org.springframework.boot:spring-boot-starter-web')
    compile('org.springframework.cloud:spring-cloud-starter-zookeeper-config')
    compile('org.springframework.cloud:spring-cloud-config-server')
}
Here is my bootstrap.properties file:
spring.application.name = myapp
spring.cloud.zookeeper.connectString: localhost:2181
This is the application class:
@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}
But the application fails to launch and gives an exception:
Error starting Tomcat context.
Exception: org.springframework.beans.factory.BeanCreationException.
Message: Error creating bean with name 'servletEndpointRegistrar' defined in class path resource [org/springframework/boot/actuate/autoconfigure/endpoint/web/ServletEndpointManagementContextConfiguration.class]: Bean instantiation via factory method failed;
nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.actuate.endpoint.web.ServletEndpointRegistrar]: Factory method 'servletEndpointRegistrar' threw exception;
nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'healthEndpoint' defined in class path resource [org/springframework/boot/actuate/autoconfigure/health/HealthEndpointConfiguration.class]: Bean instantiation via factory method failed;
nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.actuate.health.HealthEndpoint]: Factory method 'healthEndpoint' threw exception;
nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'configServerHealthIndicator' defined in class path resource [org/springframework/cloud/config/server/config/EnvironmentRepositoryConfiguration.class]: Unsatisfied dependency expressed through method 'configServerHealthIndicator' parameter 0;
nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'org.springframework.cloud.config.server.config.CompositeConfiguration': Unsatisfied dependency expressed through method 'setEnvironmentRepos' parameter 0;
nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'defaultEnvironmentRepository' defined in class path resource [org/springframework/cloud/config/server/config/DefaultRepositoryConfiguration.class]: Bean instantiation via factory method failed;
nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.cloud.config.server.environment.MultipleJGitEnvironmentRepository]: Factory method 'defaultEnvironmentRepository' threw exception;
nested exception is java.lang.NullPointerException
What could be the reason for this? It seems to be looking for a Git repository even though Zookeeper is configured.
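A hedged reading of that trace: @EnableConfigServer activates the default Git-backed EnvironmentRepository (the MultipleJGitEnvironmentRepository in the last nested exception), which fails because no Git URI is configured; the Zookeeper starter only makes this application a Zookeeper config client, it does not act as a backend for the Config Server. So either configure a Git URI for the server side, along the lines of the sketch below (the URI is purely hypothetical), or drop @EnableConfigServer and the spring-cloud-config-server dependency if the goal is just to read configuration from Zookeeper:
spring.cloud.config.server.git.uri=https://example.com/my-config-repo.git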

Getting an error when installing the mongeez plugin

I'm trying to install the mongeez plugin and I get the error below. I have included
plugins {
    ..
    compile ':mongeez:0.2.3'
    ..
}
in BuildConfig.groovy.
Error creating bean with name 'grails.mongeez.MongeezController': Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mongeez': Cannot resolve reference to bean 'mongo' while setting bean property 'mongo'; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'mongo' is defined
.....
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mongeez': Cannot resolve reference to bean 'mongo' while setting bean property 'mongo'; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'mongo' is defined
I'm able to install the plugin by adding mongeez as a dependency instead of a plugin. In BuildConfig.groovy:
dependencies {
    ......
    compile 'org.mongeez:mongeez:0.9.6'
    ......
}
Contents of the migrations folder (changelogs): I have grails-app/migrations/test.js and grails-app/migrations/mongeez.xml.
This is my mongeez.xml:
<changeFiles>
    <file path="test.js"/>
</changeFiles>
This is my test.js:
db.movie.insert({
    "name": "tutorials point"
});
I have written a Groovy script as follows:
import com.mongodb.Mongo
import org.springframework.core.io.ClassPathResource
import org.mongeez.Mongeez

includeTargets << grailsScript("_GrailsInit")

target(updateMongo: "Update the mongo DB!") {
    println "The script is about to run"
    def host = 'localhost'
    def port = 27017
    def databaseName = 'reporting'
    Mongeez mongeez = new Mongeez()
    mongeez.setFile(new ClassPathResource("/migrations/mongeez.xml"))
    mongeez.setMongo(new Mongo(host, port))
    mongeez.setDbName(databaseName)
    mongeez.process()
    println "The script just ran"
}
setDefaultTarget(updateMongo)
When I run the above script, both print statements are executed. A mongeez collection is also created in my reporting DB, but the contents of test.js (the movie collection) are not reflected in MongoDB, i.e. the movie collection is not created.
Please let me know if I'm missing something.
Got the mongeez plugin working by replacing ClassPathResource with FileSystemResource:
Mongeez mongeez = new Mongeez()
mongeez.setFile(new FileSystemResource(path)) // Give the path to your mongeez.xml
mongeez.setMongo(new Mongo(properties.host, properties.port))
mongeez.setDbName(databaseName)
mongeez.process()
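A plausible explanation for the difference (an assumption, not verified): when the script runs via the Grails command line, grails-app/migrations is not on the runtime classpath, so the ClassPathResource lookup yields nothing to process, while a FileSystemResource with an explicit path always finds the changelog.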

rmi java.lang.ClassNotFoundException: RMIServerImpl_Stub

When I start the RMI server implementation class, it displays this error message:
Remote exception: java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
        java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is:
        java.lang.ClassNotFoundException: RMIServerImpl_Stub
Commands I ran:
start rmiregistry
start java -Djava.security.policy=policyfile RMIServerImpl
What can I do to resolve this? Please help.
This is my RMI server code:
import java.rmi.*;
import java.rmi.server.*;
import java.rmi.registry.*;

public class RMIServerImpl extends UnicastRemoteObject implements RMIServer {

    RMIServerImpl() throws RemoteException {
        super();
    }

    public static void main(String args[]) {
        try {
            System.setSecurityManager(new RMISecurityManager());
            RMIServerImpl Server = new RMIServerImpl();
            Naming.rebind("SAMPLE-SERVER", Server);
            System.out.println("Server waiting.....");
        } catch (java.net.MalformedURLException mue) {
            System.out.println("Malformed URL: " + mue.toString());
        } catch (RemoteException re) {
            System.out.println("Remote exception: " + re.toString());
        }
    }
}
Sounds like you didn't run the rmic compiler to generate stubs and skeletons.
It's been so long since I've done raw RMI by hand that I don't know if that step is still required. But it was the last time I did RMI.
If you did run rmic, then I'd guess that you didn't package the stub and skeleton properly with the server and client sides. If you can find those .class files, check your packaging and deployment.
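For reference, under the static-stub model this error message points at (it asks for RMIServerImpl_Stub by name), the stub would be generated with the rmic tool that ships with the JDK. A sketch, assuming the commands run from the directory containing the compiled classes so rmiregistry can find the stub on its classpath:
javac RMIServerImpl.java
rmic RMIServerImpl
start rmiregistry
start java -Djava.security.policy=policyfile RMIServerImpl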