testcontainers: can't initialize DockerComposeContainer - docker-compose

I'm using Testcontainers (https://www.testcontainers.org) to perform integration tests. The test case requires an Oracle database and an Eclipse MicroProfile-compliant platform, which in my case is WildFly 20. I have the following docker-compose.yaml file:
version: '3.7'
...
services:
  oracle:
    image: oracleinanutshell/oracle-xe-11g:latest
    ....
    ports:
      - 49161:1521
      - 5500:5500
    environment:
      ...
    volumes:
      ...
  customers:
    image: customers:1.0-SNAPSHOT
    ...
    depends_on:
      - oracle
    ports:
      - 8080:8080
      - 9990:9990
    environment:
      ...
This docker-compose file is okay and works as expected when run with the docker-compose command or with the docker-compose-maven-plugin.
In order to use the same docker-compose.yaml file for integration tests, I'm using the following code:
public class CustomersIT
{
    @ClassRule
    public static DockerComposeContainer composer = DockerCompose.newContainer()
        .withLogConsumer(DockerCompose.DATABASE, new Slf4jLogConsumer(log))
        .withLogConsumer(DockerCompose.SERVICE, new Slf4jLogConsumer(log));

    private static URI baseUri;
    private static URI finalUri;
    private static String id;
    private static Map<String, String> props = new HashMap<>();

    @BeforeAll
    public static void beforeAll()
    {
        baseUri = UriBuilder.fromPath("customers")
            .scheme("http")
            .host(composer.getServiceHost(DockerCompose.SERVICE, DockerCompose.SERVICE_PORT))
            .port(composer.getServicePort(DockerCompose.SERVICE, DockerCompose.SERVICE_PORT))
            .build();
        finalUri = UriBuilder.fromUri(baseUri).path("test").path("customers").build();
    }
    ....
}
The code above uses the class DockerCompose which is a wrapper around DockerComposeContainer, as shown below:
public class DockerCompose
{
    public static final String DATABASE = "oracle";
    public static final String SERVICE = "customers";
    public static final int DATABASE_PORT = 1521;
    public static final int SERVICE_PORT = 8080;

    private final DockerComposeContainer dcc =
        new DockerComposeContainer(new File("../platform/src/main/resources/docker-compose.yaml"))
            .withExposedService(DATABASE, DATABASE_PORT, Wait.forLogMessage(".*WFLYSRV0051.*", 1))
            .withExposedService(SERVICE, SERVICE_PORT);

    private DockerCompose()
    {
        super();
    }

    public static DockerComposeContainer newContainer()
    {
        return new DockerCompose().dcc;
    }
}
Trying to run the integration test raises the following exception:
[INFO] T E S T S
[INFO] -------------------------------------------------------
[INFO] Running ...tests.CustomersIT
2021-02-16 19:16:06 DEBUG TestcontainersConfiguration:178 - Testcontainers configuration overrides will be loaded from file:/home/seymour/.testcontainers.properties
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.149 s <<< FAILURE! - in ....tests.CustomersIT
[ERROR] ....tests.CustomersIT Time elapsed: 0.148 s <<< ERROR!
java.lang.ExceptionInInitializerError
at ...tests.CustomersIT.<clinit>(CustomersIT.java:26)
There is no additional information, even in DEBUG mode. Line #26, referenced above, is the following one:
public static DockerComposeContainer composer = DockerCompose.newContainer()
    .withLogConsumer(DockerCompose.DATABASE, new Slf4jLogConsumer(log))
    .withLogConsumer(DockerCompose.SERVICE, new Slf4jLogConsumer(log));
so the exception is raised here:
private final DockerComposeContainer dcc =
    new DockerComposeContainer(new File("../platform/src/main/resources/docker-compose.yaml"))
        .withExposedService(DATABASE, DATABASE_PORT, Wait.forLogMessage(".*WFLYSRV0051.*", 1))
        .withExposedService(SERVICE, SERVICE_PORT);
Could anyone please let me know what I am doing wrong here?
Many thanks in advance.
Seymour

It appears that the docker-compose module in testcontainers doesn't support things like:
networks:
  of-network:
    ipv4_address: ...
or even:
container_name: ...
Removing these statements from the yaml file will solve the issue. But of course, if these statements are there, it is because they are required, and removing them probably isn't an option.
So I will conclude by saying that, since the docker-compose module doesn't fully support the official syntax, it is not yet mature enough.
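A possible workaround (not part of the original answer, just a hedged sketch): DockerComposeContainer can be switched to "local compose" mode with withLocalCompose(true), which shells out to the docker-compose binary installed on the host. Since the file is then interpreted by docker-compose itself, statements such as networks or container_name may be handled without editing the file; verify this against the Testcontainers documentation for your version.
private final DockerComposeContainer dcc =
    new DockerComposeContainer(new File("../platform/src/main/resources/docker-compose.yaml"))
        // Delegate to the locally installed docker-compose binary instead of the
        // containerised one, so the full compose syntax is parsed by docker-compose itself
        .withLocalCompose(true)
        .withExposedService(SERVICE, SERVICE_PORT, Wait.forListeningPort());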

Related

EF Core InMemory DB does not apply configurations from assembly and returns no data

I'm trying to test my app with an InMemory DB so I can run Postman tests against it.
I have a docker-compose file that successfully starts it:
version: '3.9'
services:
  api:
    image: ${DOCKER_REGISTRY-}api-test
    build:
      context: ../
      dockerfile: API/Dockerfile
    ports:
      - 80:80
      - 443:443
    environment:
      - ASPNETCORE_ENVIRONMENT=Test
The test env is set up for the InMemory DB:
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "UseOnlyInMemoryDatabase": true
}
I successfully configure the DB service in here:
namespace Infrastructure
{
    public static class Dependencies
    {
        public static void ConfigureServices(IConfiguration configuration, IServiceCollection services)
        {
            var useOnlyInMemoryDatabase = false;
            if (configuration["UseOnlyInMemoryDatabase"] != null)
            {
                useOnlyInMemoryDatabase = bool.Parse(configuration["UseOnlyInMemoryDatabase"]);
            }

            if (useOnlyInMemoryDatabase)
            {
                services.AddDbContext<BookDesinerContext>(c =>
                    c.UseInMemoryDatabase("BookDesignerDB"));
            }
            else
            {
                ...
            }
        }
    }
}
I get a successful log like this:
info: API[0]
PublicApi App created...
info: API[0]
Seeding Database...
Starting Seed Category
Ended Seeding and applying
warn: Microsoft.EntityFrameworkCore.Model.Validation[10620]
The property 'GameCell.Settings' is a collection or enumeration type with a value converter but with no value comparer. Set a value comparer to ensure the collection/enumeration elements are compared correctly.
info: Microsoft.EntityFrameworkCore.Infrastructure[10403]
Entity Framework Core 6.0.8 initialized 'BookDesinerContext' using provider 'Microsoft.EntityFrameworkCore.InMemory:6.0.7' with options: StoreName=BookDesignerDB
info: API[0]
LAUNCHING API
info: Microsoft.Hosting.Lifetime[14]
Now listening on: http://[::]:80
Notice the log from "Ended Seeding and Applying", which is set in OnModelCreating()
// Seed
builder.ApplyConfigurationsFromAssembly(Assembly.GetExecutingAssembly());
Console.WriteLine("Ended Seeding and applying");
All my configs seed data like this:
namespace Infrastructure.Data.Seeding
{
    public class TagConfig : IEntityTypeConfiguration<Tag>
    {
        public void Configure(EntityTypeBuilder<Tag> builder)
        {
            builder.ToTable("Tag");
            builder.Property(t => t.Value).IsRequired();
            builder.HasData(
                new Tag
                {
                    TagId = 1,
                    Key = "Held",
                    Value = "Borja"
                },
                new Tag
                {
                    TagId = 2,
                    Key = "Genre",
                    Value = "Pirat"
                }
            );
        }
    }
}
But when I access the collection under http://localhost/api/Tags I get []. I can use a REST client to create new resources and read them, but I want my seed data to be applied. Why does the config not apply the values from builder.HasData()?
For the seeding to actually happen, you need a call to:
context.Database.EnsureCreated();
Try (on the built service provider, e.g. during application startup):
services.GetRequiredService<BookDesinerContext>().Database.EnsureCreated();

AngularDart: How to configure routerProviders / routerProvidersHash for development and production environments?

There is one SO question about the same problem, but I can't find a production-ready code example of how to use routerProviders / routerProvidersHash in a real application.
As I understand it, we need to define two injectors and use one of them depending on a compile-time environment variable, as shown below.
// File: web/main.dart
// >>> Have to use 2 injectors:
@GenerateInjector([
  routerProvidersHash,
  ClassProvider(Client, useClass: BrowserClient),
])
final InjectorFactory injectorDev = self.injectorDev$Injector;

@GenerateInjector([
  routerProviders,
  ClassProvider(Client, useClass: BrowserClient),
])
final InjectorFactory injectorProd = self.injectorProd$Injector;
// <<<

void main() {
  final env = ServerEnvironment();
  if (env.isProduction) {
    runApp(ng.AppComponentNgFactory, createInjector: injectorProd);
  } else {
    runApp(ng.AppComponentNgFactory, createInjector: injectorDev);
  }
}

// File: lib/server_environment.dart
enum ServerEnvironmentId { development, production }

class ServerEnvironment {
  ServerEnvironmentId id;

  static final ServerEnvironment _instance = ServerEnvironment._internal();

  factory ServerEnvironment() => _instance;

  ServerEnvironment._internal() {
    const compileTimeEnvironment = String.fromEnvironment('MC_ENVIRONMENT', defaultValue: 'development');
    if (compileTimeEnvironment != 'development') {
      id = ServerEnvironmentId.production;
    } else {
      id = ServerEnvironmentId.development;
    }
  }

  bool get isProduction {
    return id == ServerEnvironmentId.production;
  }
}
File: build.production.yaml
targets:
  $default:
    builders:
      build_web_compilers|entrypoint:
        generate_for:
          - web/main.dart
        options:
          compiler: dart2js
          # List any dart2js specific args here, or omit it.
          dart2js_args:
            - -DMC_ENVIRONMENT=production
            - --fast-startup
            - --minify
            - --trust-primitives
# Build execution
pub run build_runner build --config production --release -o web:build
Is having two injectors the right way to do this?
Thank you in advance!
What I would do is make a different main.dart file for each injector setup. You shouldn't have too much in main.dart; it should just serve as a mechanism to start your app. The branching should occur in build.production.yaml by specifying a different entrypoint for production (i.e. web/main_production.dart), and that file is the one with the non-hash route provider. This removes the need for a "ServerEnvironment" and an if/else with a potentially confusing double-injector setup in one file.
// File: web/main.dart
@GenerateInjector([
  routerProvidersHash,
  ClassProvider(Client, useClass: BrowserClient),
])
final InjectorFactory injector = self.injector$Injector;

void main() {
  runApp(ng.AppComponentNgFactory, createInjector: injector);
}
and
// File: web/main_production.dart
@GenerateInjector([
  routerProviders,
  ClassProvider(Client, useClass: BrowserClient),
])
final InjectorFactory injector = self.injector$Injector;

void main() {
  runApp(ng.AppComponentNgFactory, createInjector: injector);
}
with
File: build.production.yaml
targets:
  $default:
    builders:
      build_web_compilers|entrypoint:
        generate_for:
          - web/main_production.dart
        options:
          compiler: dart2js
          # List any dart2js specific args here, or omit it.
          dart2js_args:
            - --fast-startup
            - --minify
            - --trust-primitives
Run as:
# Build execution
pub run build_runner build --config production --release -o web:build

Kafka - Redirect messages from "Topic A" to "Topic B" based on header value

I would like to redirect kafka messages from a topic called "all-topic" to a topic named "headervalue-topic" where headervalue is the value of a custom header each message has.
At the moment I'm using a custom console application that consumes messages and redirects them to the correct topic, but it only processes 16 messages per second.
Both Kafka and ZooKeeper are running in Docker containers, configured as follows:
zookeeper:
  image: "wurstmeister/zookeeper:latest"
  restart: always
  ports:
    - "2181:2181"
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_SERVER_ID: 1

kafka:
  hostname: kafka
  image: "wurstmeister/kafka:latest"
  restart: always
  depends_on:
    - zookeeper
  ports:
    - "9092:9092"
  environment:
    KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
    KAFKA_ADVERTISED_HOST_NAME: kafka
    KAFKA_ADVERTISED_PORT: 9092
What is the best and fastest way to achieve my goal?
I do know about the existence of Kafka Streams, but I'm not familiar with Java, so if you'd like to suggest Kafka Streams, a little example would be appreciated :)
Many Thanks!
Here is the solution I came up with, using the kafka-streams Node.js library:
const {KafkaStreams} = require("kafka-streams");
const {nativeConfig: config} = require("./config.js");

const kafkaStreams = new KafkaStreams(config);
const myConsumerStream = kafkaStreams.getKStream("all-topic");

myConsumerStream
    .mapJSONConvenience()
    .filter((element) => {
        return element.value.type == "Article";
    })
    .tap((element) => {console.log("Got Article")})
    .mapWrapKafkaValue()
    .to("Article-topic", 1, "buffer");

myConsumerStream.start();
From what I know, you can't access the headers directly through the DSL.
You can access them through the ProcessorContext using a stream processor, though, and here is a little example I came up with:
import java.util.HashMap;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class CustomProcessor1 implements Processor<String, String> {

    private ProcessorContext context;

    @Override
    public void init(ProcessorContext processorContext) {
        this.context = processorContext;
    }

    @Override
    public void process(String key, String value) {
        // Collect the record headers into a map
        HashMap<String, String> headers = new HashMap<>();
        for (Header header : context.headers()) {
            headers.put(header.key(), new String(header.value()));
        }
        String headerValue = headers.get("certainHeader").replace("\"", "");
        if (headerValue.equals("expectedHeaderValue")) {
            context.forward(key, value);
        }
    }
}
Above is the processor, which will forward messages whose certainHeader matches the expected value to the downstream process. The processor is then used when creating the streaming topology, as below:
public static void main(String[] args) throws Exception {
    Properties props = getProperties();

    final Topology topology = new Topology()
        .addSource("SOURCE", "all.topic")
        .addProcessor("CUSTOM_PROCESSOR_1", CustomProcessor1::new, "SOURCE")
        .addProcessor("CUSTOM_PROCESSOR_2", CustomProcessor2::new, "SOURCE")
        .addSink("SINK1", "headervalue1-topic", "CUSTOM_PROCESSOR_1")
        .addSink("SINK2", "headervalue2-topic", "CUSTOM_PROCESSOR_2");

    // Start the streams application with the topology defined above
    final KafkaStreams streams = new KafkaStreams(topology, props);
    streams.start();
}

Eureka never unregisters a service

I'm currently facing an issue where Eureka does not unregister a registered service. I've pulled the Eureka server example straight from GitHub and made only one change, eureka.enableSelfPreservation = false. My application.yml looks like this:
server:
  port: 8761

eureka:
  enableSelfPreservation: false
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    waitTimeInMsWhenSyncEmpty: 0
I've read that if 85% of the registered services stop delivering heartbeats within 15 minutes, Eureka assumes the issue is network related and does not de-register the services that are not responding. In my case I have only one service running, so I disabled self-preservation mode. I am abruptly killing the process and Eureka leaves the service registered for what seems like an indefinite amount of time.
My client's application.yml looks like this:
eureka:
  instance:
    leaseRenewalIntervalInSeconds: 3
  client:
    healthcheck:
      enabled: true
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  appInfo:
    replicate:
      interval: 3
    initial:
      replicate:
        time: 3
spring:
  rabbitmq:
    addresses: ${vcap.services.${PREFIX:}rabbitmq.credentials.uri:amqp://${RABBITMQ_HOST:localhost}:${RABBITMQ_PORT:5672}}
My goal is to create a demo where Eureka quickly detects the service is no longer running and another service that is started can quickly register itself.
As of now, once the eureka client is started, it registers in 3 seconds. It just never un-registers when the service is abruptly terminated. After I kill the service, the Eureka dashboard reads:
EMERGENCY! EUREKA MAY BE INCORRECTLY CLAIMING INSTANCES ARE UP WHEN THEY'RE NOT. RENEWALS ARE LESSER THAN THRESHOLD AND HENCE THE INSTANCES ARE NOT BEING EXPIRED JUST TO BE SAFE.
How can I prevent this behavior?
I realized that self preservation mode was never actually being disabled. It turns out the actual property is
eureka.server.enableSelfPreservation=false
(See DefaultEurekaServerConfig Code), which I haven't found documented anywhere. This resolved my issue.
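Not from the original answer, but relevant if the goal is fast de-registration in a demo: even with self-preservation disabled, the server only evicts expired leases on a periodic task (60 s by default) and serves registry reads from a cache (30 s by default). A hedged sketch for the server's application.yml, based on the property names in Spring Cloud Netflix's EurekaServerConfigBean (verify them against your version):
eureka:
  server:
    enableSelfPreservation: false
    # how often the lease-eviction task runs (default 60000 ms)
    evictionIntervalTimerInMs: 5000
    # how often the registry response cache is refreshed (default 30000 ms)
    responseCacheUpdateIntervalMs: 5000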
I made service de-registration work by setting the below values
Eureka server application.yml
eureka:
  server:
    enableSelfPreservation: false

Service application.yml
eureka:
  instance:
    leaseRenewalIntervalInSeconds: 1
    leaseExpirationDurationInSeconds: 2
The full example is here https://github.com/ExampleDriven/spring-cloud-eureka-example
After struggling a lot, I finally got a solution for the case where a service is unregistered from the Eureka server due to some issue: notify the admin by extending the health-check callbacks of the Eureka APIs.
Let's say Service-A registers with Eureka. The Eureka client is therefore integrated with Service-A, and the following callbacks are implemented in Service-A.
Service-A [Eureka-Client]
Add the following properties to the properties file.
#Eureka Configuration
eureka.client.eureka-server-port=8761
eureka.client.register-with-eureka=true
eureka.client.healthcheck.enabled=false
eureka.client.prefer-same-zone-eureka=true
eureka.client.fetchRegistry=true
eureka.client.serviceUrl.defaultZone=${eurekaServerURL1}, ${eurekaServerURL2}
eureka.client.eureka.service-url.defaultZone=${eurekaServerURL1}, ${eurekaServerURL2}
eureka.instance.hostname=${hostname}
eureka.client.lease.duration=30
eureka.instance.lease-renewal-interval-in-seconds=30
eureka.instance.lease-expiration-duration-in-seconds=30
Add the following Java files.
@Component
public class EurekaHealthCheckHandler implements HealthCheckHandler, ApplicationContextAware, InitializingBean {

    static Logger logger = LoggerFactory.getLogger(EurekaHealthCheckHandler.class);

    private static final Map<Status, InstanceInfo.InstanceStatus> healthStatuses = new HashMap<Status, InstanceInfo.InstanceStatus>() {{
        put(Status.UNKNOWN, InstanceInfo.InstanceStatus.UNKNOWN);
        put(Status.OUT_OF_SERVICE, InstanceInfo.InstanceStatus.OUT_OF_SERVICE);
        put(Status.DOWN, InstanceInfo.InstanceStatus.DOWN);
        put(Status.UP, InstanceInfo.InstanceStatus.UP);
    }};

    @Autowired
    ComunocationService comunocationService;

    private final CompositeHealthIndicator healthIndicator;

    private ApplicationContext applicationContext;

    public EurekaHealthCheckHandler(HealthAggregator healthAggregator) {
        Assert.notNull(healthAggregator, "HealthAggregator must not be null");
        this.healthIndicator = new CompositeHealthIndicator(healthAggregator);
        Health health = healthIndicator.health();
        logger.info(" =========== Testing =========== {}", health.toString());
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        final Map<String, HealthIndicator> healthIndicators = applicationContext.getBeansOfType(HealthIndicator.class);
        for (Map.Entry<String, HealthIndicator> entry : healthIndicators.entrySet()) {
            logger.info("======" + entry.getKey() + "============= " + entry.getValue());
            healthIndicator.addHealthIndicator(entry.getKey(), entry.getValue());
        }
    }

    @Override
    public InstanceInfo.InstanceStatus getStatus(InstanceInfo.InstanceStatus instanceStatus) {
        logger.info("============== Custom Eureka Implementation ===================" + getHealthStatus());
        return getHealthStatus();
    }

    protected InstanceInfo.InstanceStatus getHealthStatus() {
        final Status status = healthIndicator.health().getStatus();
        return mapToInstanceStatus(status);
    }

    protected InstanceInfo.InstanceStatus mapToInstanceStatus(Status status) {
        logger.info("============== Test Custom Eureka Implementation ==================={}", status);
        if (status.equals(Status.UP)) {
            // Send mail after configured times
            comunocationService.sendEmail("ServiceName");
        }
        if (!healthStatuses.containsKey(status)) {
            return InstanceInfo.InstanceStatus.UNKNOWN;
        }
        return healthStatuses.get(status);
    }

    public void getstatusChangeListner() {
        ApplicationInfoManager.StatusChangeListener statusChangeListener = new ApplicationInfoManager.StatusChangeListener() {
            @Override
            public String getId() {
                return "statusChangeListener";
            }

            @Override
            public void notify(StatusChangeEvent statusChangeEvent) {
                if (InstanceStatus.DOWN == statusChangeEvent.getStatus() ||
                        InstanceStatus.DOWN == statusChangeEvent.getPreviousStatus()) {
                    // log at warn level if DOWN was involved
                    logger.warn("Saw local status change event {}", statusChangeEvent);
                } else {
                    logger.info("Saw local status change event {}", statusChangeEvent);
                }
            }
        };
    }
}
and
@Configuration
public class EurekaHealthCheckHandlerConfiguration {

    @Autowired(required = false)
    private HealthAggregator healthAggregator = new OrderedHealthAggregator();

    @Bean
    @ConditionalOnMissingBean
    public EurekaHealthCheckHandler eurekaHealthCheckHandler() {
        return new EurekaHealthCheckHandler(healthAggregator);
    }
}
This is working, well-tested code.

* Unrecognized field at: database Did you mean?: - metrics - server - logging - DROPWIZARD

I cannot start my Dropwizard application after adding database details to my application configuration file (server.yml).
server.yml (app config file)
server:
  applicationConnectors:
    - type: http
      port: 8080
  adminConnectors:
    - type: http
      port: 9001

database:
  # the name of your JDBC driver
  driverClass: org.postgresql.Driver
  # the username
  user: dbuser
  # the password
  password: pw123
  # the JDBC URL
  url: jdbc:postgresql://localhost/database
  # any properties specific to your JDBC driver:
  properties:
    charSet: UTF-8
  # the maximum amount of time to wait on an empty pool before throwing an exception
  maxWaitForConnection: 1s
  # the SQL query to run when validating a connection's liveness
  validationQuery: "/* MyService Health Check */ SELECT 1"
  # the timeout before a connection validation query fails
  validationQueryTimeout: 3s
  # the minimum number of connections to keep open
  minSize: 8
  # the maximum number of connections to keep open
  maxSize: 32
  # whether or not idle connections should be validated
  checkConnectionWhileIdle: false
  # the amount of time to sleep between runs of the idle connection validation, abandoned cleaner and idle pool resizing
  evictionInterval: 10s
  # the minimum amount of time a connection must sit idle in the pool before it is eligible for eviction
  minIdleTime: 1 minute
When I run the Dropwizard application, it fails with this error:
* Unrecognized field at: database
  Did you mean?:
    - metrics
    - server
    - logging
In addition to the code given in the Dropwizard example, you need to add a setter for the database property.
@Valid
@NotNull
@JsonProperty("database")
private DataSourceFactory database = new DataSourceFactory();

public DataSourceFactory getDataSourceFactory() {
    return database;
}

public void setDatabase(DataSourceFactory database) {
    this.database = database;
}
In your application configuration Java file, you have to add the matching property for "database". If the properties you're specifying are the standard ones (which they look to be, good!) then you can stick with the DataSourceFactory type:
public class ExampleConfiguration extends Configuration {

    @Valid
    @NotNull
    @JsonProperty
    private DataSourceFactory database = new DataSourceFactory();

    public DataSourceFactory getDataSourceFactory() {
        return database;
    }

    public void setDatabase(DataSourceFactory database) {
        this.database = database;
    }
}
Example here: http://www.dropwizard.io/0.9.0/docs/manual/jdbi.html
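For completeness, a rough sketch (not from the original answers; it follows the dropwizard-jdbi manual linked above, and the DAO and resource names are placeholders) of how the database section is then typically consumed in the application's run() method:
public class ExampleApplication extends Application<ExampleConfiguration> {

    @Override
    public void run(ExampleConfiguration configuration, Environment environment) {
        // Build a JDBI instance from the "database" block of server.yml
        final DBIFactory factory = new DBIFactory();
        final DBI jdbi = factory.build(environment, configuration.getDataSourceFactory(), "postgresql");

        // Hypothetical DAO and resource, only to show how the DBI instance is used
        final UserDAO dao = jdbi.onDemand(UserDAO.class);
        environment.jersey().register(new UserResource(dao));
    }
}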