Command-side events are being processed, but the query-side event handler (projector) is never invoked.
I am using the Axon Kafka extension 4.0-RC2.
Please see the code below for reference.
AxonConfig
import org.springframework.context.annotation.Configuration;

@Configuration
public class AxonConfig {
}
application.yml
server:
  port: 9001
spring:
  application:
    name: Query Application
  datasource:
    url: jdbc:postgresql://localhost:5441/orderdemo
    username: orderdemo
    password: secret
    driver-class-name: org.postgresql.Driver
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQL95Dialect
        jdbc:
          lob:
            non_contextual_creation: true
        hbm2ddl.auto: update
        implicit_naming_strategy: org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
        physical_naming_strategy: org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
axon:
  eventhandling:
    processors:
      query:
        mode: tracking
        source: kafkaMessageSource
  kafka:
    default-topic: axon-events
    consumer:
      group-id: query-group
    bootstrap-servers: localhost:9092
For this configuration to work, the classes containing the @EventHandler annotated methods that you want invoked for handling the events from Kafka need to be part of the processing group query.
This requirement follows from the configuration pattern you've chosen, where axon.eventhandling.processors.query defines the Processing Group you want to configure. To specify the Processing Group, I think the easiest approach is to add the @ProcessingGroup annotation to your event handling class. In the annotation, you have to provide the name of the Processing Group, which needs to correspond with what you've set in the configuration file.
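For example, a minimal projector might look like the following (the class and event names here are made up for illustration):

import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;
import org.springframework.stereotype.Component;

// The annotation value must match the key under
// axon.eventhandling.processors in application.yml.
@Component
@ProcessingGroup("query")
public class OrderProjector {

    @EventHandler
    public void on(OrderCreatedEvent event) {
        // update the query model here
    }
}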
Lastly, I would suggest using a different name than query for your Processing Group. Something more specific to the query model that the Event Handler updates would seem more in place to me.
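For instance, with a hypothetical name like order-projection, the processor configuration and the annotation would become:

axon:
  eventhandling:
    processors:
      order-projection:
        mode: tracking
        source: kafkaMessageSource

together with @ProcessingGroup("order-projection") on the handler class.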
Hope this helps!
I am trying to export OpenTelemetry metrics to OpenSearch.
My configuration is as follows:
metrics-pipeline:
  source:
    otel_metrics_source::
  processor:
    - otel_metrics_raw_processor:
  sink:
    - opensearch:
        hosts: ["https://<domain-name>:443"]
        insecure: true
        username: "username"
        password: "password"
I was going through one of the Data Prepper issues and learned that metrics support was added recently:
https://github.com/opensearch-project/data-prepper/issues/242
I am not able to find proper documentation on this.
In the Data Prepper pod, I am getting the exception below:
com.amazon.dataprepper.model.plugin.NoPluginFoundException: Unable to find a plugin named 'otel_metrics_source:'. Please ensure that plugin is annotated with appropriate values.
at com.amazon.dataprepper.plugin.DefaultPluginFactory.lambda$getPluginClass$2(DefaultPluginFactory.java:111) ~[data-prepper.jar:1.5.1]
at java.util.Optional.orElseThrow(Optional.java:401) ~[?:?]
at com.amazon.dataprepper.plugin.DefaultPluginFactory.getPluginClass(DefaultPluginFactory.java:111) ~[data-prepper.jar:1.5.1]
at com.amazon.dataprepper.plugin.DefaultPluginFactory.loadPlugin(DefaultPluginFactory.java:62) ~[data-prepper.jar:1.5.1]
I would appreciate any input on this.
Currently, there is no section on OpenTelemetry metrics support in the general Data Prepper documentation. You can find documentation within the respective plugin directories:
otel-metrics-source
otel-metrics-raw-processor
There is also a blog post on OpenTelemetry metrics ingestion with Data Prepper on the OpenSearch blog. It contains a configuration example.
Just remove the extra colon at the end of otel_metrics_source, set the ssl flag to false, and add the index in the opensearch section.
Thank you @Karsten Schnitter for your help.
Updated configuration:
metrics-pipeline:
  source:
    otel_metrics_source:
      ssl: false
  processor:
    - otel_metrics_raw_processor:
  sink:
    - opensearch:
        hosts: ["https://<domain-name>:443"]
        insecure: true
        username: "username"
        password: "password"
        index: metrics-otel-v1-%{yyyy.MM.dd}
Below is how I'm trying to add a custom field name in my Filebeat 7.2.0:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - D:\Oasis\Logs\Admin_Log\*
    - D:\Oasis\Logs\ERA_Log\*
    - D:\OasisServices\Logs\*
  processors:
    - add_fields:
        fields:
          application: oasis
With this, I'm expecting a new field called application whose value will be 'oasis'.
But I don't get one.
I also tried
fields:
  application: oasis/'oasis'
Help me with this.
If you want to add a customized field for every log, you should put the fields configuration at the same level as type. Try the following:
- type: log
  enabled: true
  paths:
    - D:\Oasis\Logs\Admin_Log\*
    - D:\Oasis\Logs\ERA_Log\*
    - D:\OasisServices\Logs\*
  fields.application: oasis
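One caveat: by default Filebeat stores custom fields under a fields sub-dictionary, so the field above will appear as fields.application in the event. To place it at the root instead, also set fields_under_root at the same level of the input:

  fields.application: oasis
  fields_under_root: true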
There are two ways to add custom fields in Filebeat: using the fields option and using the add_fields processor.
To add fields using the fields option, your configuration needs to be something like the one below.
filebeat.inputs:
- type: log
  paths:
    - 'D:/path/to/your/files/*'
  fields:
    custom_field: 'custom field value'
  fields_under_root: true
To add fields using the add_fields processor, you can try the following configuration.
filebeat.inputs:
- type: log
  paths:
    - 'D:/path/to/your/files/*'
processors:
  - add_fields:
      target: ''
      fields:
        custom_field: 'custom field value'
Both configurations will create a field named custom_field with the value 'custom field value' in the root of your document.
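For instance, an exported event would then look roughly like this (a simplified sketch; the other fields depend on your setup):

{
  "@timestamp": "2019-08-01T12:00:00.000Z",
  "message": "a line from one of the log files",
  "custom_field": "custom field value"
}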
The fields option can be used per input, while the add_fields processor is applied to all the data exported by the Filebeat instance.
Just remember to pay attention to the indentation of your configuration; if it is wrong, Filebeat won't work correctly or may not even start.
I have 3 front-end applications and 3 back-end applications. Let us say one virtual machine hosts both a front-end and a back-end application. Each front-end application connects to a back end using a discovery client powered by Zookeeper.
Now I want to create network affinity or zones such that FE1 connects to BE1 if available, and connects to BE2/BE3 if BE1 is down. Can this be achieved in spring-cloud-zookeeper?
This can be done using Eureka, but I would prefer to do it using Zookeeper.
EDIT
OK, in Eureka we can set the zone field, and Ribbon can do zone affinity on the client based on the zone retrieved from Eureka for each server. The issue is that with Zookeeper, although Ribbon uses the same ZonePreferenceServerListFilter, Zookeeper does not pass the zone info, so the zone always remains UNKNOWN and zone filtering is not applied.
As a workaround, I tried passing the zone info as metadata while registering the service, as shown below.
spring:
  application:
    name: kp-zk-server
  cloud:
    zookeeper:
      discovery:
        metadata:
          zone: default
Now, on the client, I create a Ribbon configuration that retrieves the zone info from the metadata and uses it as a filter, as shown below.
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

import com.netflix.client.config.IClientConfig;
import com.netflix.loadbalancer.Server;
import com.netflix.loadbalancer.ServerListFilter;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.zookeeper.discovery.ZookeeperServer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DefaultRibbonConfig {

    @Value("${archaius.deployment.zone:default}")
    private String zone;

    // Keep only servers whose registered "zone" metadata matches this client's zone.
    private Predicate<Server> filter = server -> {
        if (server instanceof ZookeeperServer) {
            ZookeeperServer zkServer = (ZookeeperServer) server;
            String str = zkServer.getInstance().getPayload().getMetadata().get("zone");
            return zone.equals(str);
        }
        return true;
    };

    @Bean
    public ServerListFilter<Server> ribbonServerListFilter(IClientConfig config) {
        return new ServerListFilter<Server>() {
            @Override
            public List<Server> getFilteredListOfServers(List<Server> servers) {
                // Fall back to the full list when no server matches the zone.
                List<Server> selected = servers.stream().filter(filter).collect(Collectors.toList());
                return selected.isEmpty() ? servers : selected;
            }
        };
    }
}
bootstrap.yml
archaius:
  deployment:
    zone: Zone1
spring:
  application:
    name: kp-zk-consumer
  cloud:
    zookeeper:
      dependency:
        enabled: true
        resttemplate:
          enabled: false
      discovery:
        enabled: true
        default-health-endpoint: /actuator/health
      dependencies:
        kWebClient:
          path: /kp-zk-server
          loadBalancerType: ROUND_ROBIN
          required: true
#ribbon:
#  NIWSServerListFilterClassName: io.github.kprasad99.zk.KZoneAffinityServerFilter
Problem
Now the problem is that my custom filter class is not being enabled/used; Ribbon is still using the default zone filter when I define the configuration using @RibbonClients:
@RibbonClients(defaultConfiguration = DefaultRibbonConfig.class)
However, if I declare the filter using ribbon.NIWSServerListFilterClassName, the filter is applied, but in that case I cannot set the zone property and need to hardcode it.
As far as I know this isn't possible with Zookeeper out of the box.
However, you could achieve the same result by using spring-cloud-loadbalancer with a custom ServiceInstanceSupplier that extends DiscoveryClientServiceInstanceSupplier and filters the instances based on metadata that has been set, returning the complete list of discovered instances if none match the criteria so that you have a fallback.
This is a generic solution that could solve your problem even if you're running in the same datacenter, for example.
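A rough sketch of that idea, written against the newer ServiceInstanceListSupplier contract from spring-cloud-loadbalancer rather than the ServiceInstanceSupplier mentioned above (the class name and the "zone" metadata key are assumptions; adapt them to the version you are on):

import java.util.List;
import java.util.stream.Collectors;

import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.loadbalancer.core.DelegatingServiceInstanceListSupplier;
import org.springframework.cloud.loadbalancer.core.ServiceInstanceListSupplier;

import reactor.core.publisher.Flux;

// Prefers instances whose "zone" metadata matches the local zone,
// falling back to the full list when no instance matches.
public class ZoneAffinityInstanceListSupplier extends DelegatingServiceInstanceListSupplier {

    private final String zone;

    public ZoneAffinityInstanceListSupplier(ServiceInstanceListSupplier delegate, String zone) {
        super(delegate);
        this.zone = zone;
    }

    @Override
    public Flux<List<ServiceInstance>> get() {
        return getDelegate().get().map(instances -> {
            List<ServiceInstance> sameZone = instances.stream()
                    .filter(instance -> zone.equals(instance.getMetadata().get("zone")))
                    .collect(Collectors.toList());
            return sameZone.isEmpty() ? instances : sameZone;
        });
    }
}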
Hope this helps!
I can't figure out whether spring-cloud-gateway supports reading routes from the Consul registry, as is possible with Zuul.
I added the spring-cloud-starter-consul-discovery dependency and @EnableDiscoveryClient, and configured the Consul properties in application.yml; however, /actuator/gateway/routes doesn't show any routes from Consul.
I also tried to set spring.cloud.gateway.discovery.locator.enabled: true, but it didn't change anything.
Sample configuration below:
spring:
  cloud:
    consul:
      discovery:
        register: false
        locator:
          enabled: true
        acl-token: d3ee84e2-c99a-5d84-e4bf-b2cefd7671ba
        enabled: true
So the main question: is it even supposed to work?
EDIT: I probably should have mentioned it is version 2.0.0.M5, with Spring Boot 2.0.0.M7.
I also launched with --debug, and there is this line:
GatewayDiscoveryClientAutoConfiguration#discoveryClientRouteDefinitionLocator:
   Did not match:
      - @ConditionalOnBean (types: org.springframework.cloud.client.discovery.DiscoveryClient; SearchStrategy: all) did not find any beans of type org.springframework.cloud.client.discovery.DiscoveryClient (OnBeanCondition)
   Matched:
      - @ConditionalOnProperty (spring.cloud.gateway.discovery.locator.enabled) matched (OnPropertyCondition)
I could solve it by declaring the following bean: DiscoveryClientRouteDefinitionLocator (reference).
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.gateway.discovery.DiscoveryClientRouteDefinitionLocator;
import org.springframework.cloud.gateway.discovery.DiscoveryLocatorProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableDiscoveryClient
public class AutoRouting {

    @Bean
    public DiscoveryClientRouteDefinitionLocator discoveryClientRouteDefinitionLocator(DiscoveryClient discoveryClient, DiscoveryLocatorProperties properties) {
        return new DiscoveryClientRouteDefinitionLocator(discoveryClient, properties);
    }
}
P.S.: You need to include spring-cloud-consul.
I have a ConfigServer, very basic:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@EnableConfigServer
@SpringBootApplication
public class ConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}
I'm using spring-cloud-config-server:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-server</artifactId>
</dependency>
And I expect it to work the same when deployed to Pivotal Web Services as when I run it locally.
I deployed my configs to a public server with encrypted keys:
spring:
  cloud:
    config:
      server:
        git:
          uri: https://mypublic.domain/gitbasedconfig
And in my bootstrap.yml and application.yml I have a property with the key:
encrypt:
  key: my.super.secret.symmetric.key
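For reference, encrypted values in the config repository carry the {cipher} prefix and must be quoted in YAML. The property name and ciphertext here are placeholders:

someservice:
  password: '{cipher}AQB2F0b24...'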
This all works locally:
curl http://localhost:8888/myservice/default
responds with all of my encrypted passwords decrypted properly.
When I deploy the same artifact to PWS with the following manifest.yml:
---
applications:
- name: myservice
  memory: 384M
  disk: 384M
  buildpack: java_buildpack
  path: target/myservice.jar
  env:
    ENCRYPT_KEY: my.super.secret.symmetric.key
Whether I deploy with or without the ENCRYPT_KEY env entry, neither works. When I call the service, all of my encrypted values are returned as
invalid.my.key.name: "<n/a>",
In the PWS logs I can see this:
Fri May 20 2016 13:26:21 GMT-0500 (CDT) [APP] OUT {"timeMillis":1463768781279,"thread":"http-nio-8080-exec-4","level":"WARN","loggerName":"org.springframework.cloud.config.server.encryption.CipherEnvironmentEncryptor","message":"Cannot decrypt key: my.key.name (class java.lang.IllegalArgumentException: Unable to initialize due to invalid secret key)","endOfBatch":false,"loggerFqcn":"org.apache.commons.logging.impl.SLF4JLocationAwareLog","contextMap":[],"source":{"class":"org.springframework.cloud.config.server.encryption.CipherEnvironmentEncryptor","method":"decrypt","file":"CipherEnvironmentEncryptor.java","line":81}}
When I look at http://myservice.on.pws/env I can see that there are values for encrypt.key from both application.yml and bootstrap.yml, and I can also see the environment value. These are all the same value.
Why are my encrypted values not being decrypted properly when I'm providing the symmetric key in both the properties files and the environment? Is there some other property I need to add to make this work on PWS? The non-encrypted values are working properly within the same configs, so everything is wired properly; it's just the encrypted values that are not working.
I think that Spencergibb and Vinicius Carvalho were both correct.
The Java Cryptography Extension (JCE) unlimited-strength policy files can't be distributed with the standard Java buildpack.
The Pivotal Support site provided a possible solution, which is to fork the Java buildpack and update it to include the proper JCE policy files, then deploy the application with the custom buildpack (see the manifest sketch below). One caveat is that you won't get the automatic buildpack updates.
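In manifest.yml that amounts to pointing the buildpack attribute at the fork; the repository URL here is a placeholder:

---
applications:
- name: myservice
  memory: 384M
  disk: 384M
  buildpack: https://github.com/your-org/java-buildpack.git
  path: target/myservice.jar
  env:
    ENCRYPT_KEY: my.super.secret.symmetric.key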
https://support.run.pivotal.io/entries/76559625-How-do-I-use-the-JCE-Unlimited-Strength-policy-with-my-Java-app-